Provide directory listings using API Gateway as a proxy to S3

I'm trying to host a static site through S3 and API Gateway (using S3 directly isn't an option due to security policy). One of the pages runs a client-side script to pull back a set of files from a specific folder on the server. I've set up the hosting following the Amazon tutorial.
For this to run, my script needs to be able to obtain the list of files for a specific folder.
If I were hosting the site on my own server using Apache, I could rely on the directory listing feature, where a GET on a folder with no index.html returns a file list. The tutorial suggests that this should be possible, but I can't seem to get it to work. If I submit a request to a particular {prefix}/{identifier}, I can retrieve the specific file, but sending a request to {prefix}/ returns an error.
Is there a way I can replicate directory listings so my Javascript can pull it down and read it, or is the only solution to write a server-side API in Lambda?
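(For anyone weighing the Lambda route mentioned above, here is a rough sketch, not from the original question, of a handler behind API Gateway that lists the keys under a prefix with boto3; the bucket name, event shape and response format are all assumptions.)

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-static-site-bucket"  # hypothetical bucket name

def handler(event, context):
    # e.g. GET /list?prefix=somefolder/ arrives as {"queryStringParameters": {"prefix": "somefolder/"}}
    prefix = (event.get("queryStringParameters") or {}).get("prefix", "")
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix)
    keys = [obj["Key"] for obj in resp.get("Contents", [])]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(keys),
    }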

Related

Configuring Google Cloud Load Balancer path rules

I'm trying to configure the Google Cloud load balancer to do the following:
I have a website running on a WordPress machine in a VM instance, which I want users to access when they enter outairnet.com.
And I have a separate HTML file that I want users to access when they go to outairnet.com/map.
WP is running on a Compute Engine VM, connected to a VM instance and to a backend service. The separate HTML file is in a storage bucket, connected to a backend bucket.
I've tried to configure a very simple path forwarding rule, which made sense to me, but it just ends up with anyone trying to access outairnet.com/* reaching WP (which is fine),
but accessing outairnet.com/map doesn't point to the storage bucket with the HTML file. However, accessing outairnet.com/index.html does point to the separate HTML file.
My LB config looks like this.
I think I'm on to the problem but still can't solve it.
It looks like the Google Cloud console adds a /* rule even when I try to delete it,
so it's a /* path rule that catches everything despite there being a more specific rule like /mypath/* in addition.
But after removing it, it just gets re-added automatically for some reason. Why?
It's possible - there are a few steps involved, such as creating a bucket with your static page, adding it as a backend bucket in your load balancer and creating a new path rule in it to route the requests.
And now the details:
1) Create a new bucket - pick a name you like (outairnet-static or something that will be meaningful to you so you don't delete it by accident). You can ignore all the tutorials telling you that it has to have the exact name of your domain - since it will only be hosting a file accessible under outairnet.com/mylink/, it will work regardless of the name used. I tested it.
2) Create a directory in your bucket named exactly as the path under which you want it to be served. If you want outairnet.com/mylink/, then the directory's name has to be mylink. Upload your files into that directory. Name your main index file index.html unless you want to provide the full file path.
3) Make the bucket available to everyone. (Steps 1-3 are sketched in code after this list.)
4) Go to your LB configuration and edit backend services; add a new backend bucket.
5) Go to your Host and Path Rules and configure a new path: enter the name of your site and the path (remember that the path value must be /mylink/*) and select the bucket you've just created from the dropdown list.
6) No changes are necessary for the frontend. Save the changes and in a few moments it should be working.
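For reference, a minimal sketch of steps 1-3 using the google-cloud-storage Python client; the bucket and file names are placeholders, and it assumes the bucket does not use uniform bucket-level access (otherwise grant allUsers the objectViewer role instead of calling make_public):

from google.cloud import storage

client = storage.Client()

# 1) Create the bucket (the name does not have to match your domain).
bucket = client.create_bucket("outairnet-static")

# 2) Upload the page under the path you want served, e.g. outairnet.com/mylink/
blob = bucket.blob("mylink/index.html")
blob.upload_from_filename("index.html", content_type="text/html")

# 3) Make the object publicly readable.
blob.make_public()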
I just added another path rule with just "/" directing to the VM and it seemed to do it, but now the only glitch is that www.outairnet.com/map is fine, while outairnet.com/map without www directs to the VM and not the bucket.

AWS API Gateway Integration Request setup automation

I am trying to use AWS API Gateway + Swagger to route requests to my Express backend. I can't figure out how to automate the setup of the integration request, as the Swagger file has no details on it.
I'm also having difficulty with the endpoint URL parameter when setting my method requests to GET with VPC Link as the integration type.
For example:
My api gateway path is /info/car/{model}/aggregate
Now the endpoint URL is http://carapi.com/info/car/{model}/aggregate
I have lots of gateway paths, all of which are the same paths that my carapi.com site uses, so I don't want to keep retyping the path over and over. When entering the endpoint URL, I was able to avoid typing carapi.com by using stage variables, turning my endpoint URL into
http://${stageVariables.carApi}/info/car/{model}/aggregate
However after reading https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html#stagevariables-template-reference
I see that there is also a $context variable available, but it gives me an error when I try to use it (calling the API in Postman returns 'internal server error' as the message), even though that page suggests I can reference the path through $context.
http://${stageVariables.carApi}/${context.resourcePath}
So my question is: how do I automate the setup of integration requests so I don't have to manually set up each and every one (as I have hundreds of paths)? Is there also any way to not have to set the paths manually for the endpoints?
I don't know of any solution which is ready to use.
In my project, we wrote a custom solution that downloads the application's Swagger JSON, parses it, adds the required integration and generates another JSON file. Then we import that into API Gateway. This has been set up as a Jenkins job and runs as a post-deployment step for each microservice.
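As an illustration of that first approach (not the exact script we use), here is a sketch that injects an x-amazon-apigateway-integration block into every operation of a Swagger/OpenAPI file before import; the stage variable names (carApi, vpcLinkId) and the http_proxy/VPC_LINK settings are assumptions matching the question's setup:

import json
import re

HTTP_VERBS = {"get", "post", "put", "delete", "patch", "head", "options"}

with open("swagger.json") as f:
    spec = json.load(f)

for path, operations in spec.get("paths", {}).items():
    for verb, operation in operations.items():
        if verb.lower() not in HTTP_VERBS:
            continue  # skip path-level keys such as "parameters"
        integration = {
            "type": "http_proxy",
            "httpMethod": verb.upper(),
            "uri": "http://${stageVariables.carApi}" + path,
            "connectionType": "VPC_LINK",
            "connectionId": "${stageVariables.vpcLinkId}",
        }
        # Pass any {param} in the path straight through to the backend.
        params = re.findall(r"\{(\w+)\}", path)
        if params:
            integration["requestParameters"] = {
                "integration.request.path." + p: "method.request.path." + p for p in params
            }
        operation["x-amazon-apigateway-integration"] = integration

with open("swagger-with-integrations.json", "w") as f:
    json.dump(spec, f, indent=2)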
Another way is, instead of generating the JSON, to call the API Gateway APIs directly and add the integration.
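A sketch of that second approach with boto3, assuming an existing API and VPC link; the IDs and backend host are placeholders, and paths containing {params} would additionally need requestParameters mappings as in the previous sketch:

import boto3

apigw = boto3.client("apigateway")
REST_API_ID = "abc123"   # hypothetical REST API id
VPC_LINK_ID = "vlink1"   # hypothetical VPC link id

resources = apigw.get_resources(restApiId=REST_API_ID, limit=500, embed=["methods"])
for resource in resources["items"]:
    for verb in resource.get("resourceMethods", {}):
        apigw.put_integration(
            restApiId=REST_API_ID,
            resourceId=resource["id"],
            httpMethod=verb,
            type="HTTP_PROXY",
            integrationHttpMethod=verb,
            connectionType="VPC_LINK",
            connectionId=VPC_LINK_ID,
            uri="http://${stageVariables.carApi}" + resource["path"],
        )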

AWS S3 Redirect only works on bucket as a subdomain not bucket as a directory

Many people have received hundreds of links to PoCs that are on an internal-facing bucket, and the links are in this structure:
https://s3.amazonaws.com/bucket_name/
I added a redirect using AWS's Static website hosting section in Properties and it ONLY redirects when the domain is formatted like this:
https://bucket_name.s3-website-us-east-1.amazonaws.com
Is this a bug with S3?
For now, how do I make it redirect using both types of links? My current workaround is to add a meta redirect tag in each html file.
The s3-website endpoint is, unfortunately, the only endpoint that supports redirects. Using s3.amazonaws.com assumes that you will be using S3 as a storage layer instead of a website. If the link is to a specific object, you can place an HTML file at that URL with a JS redirect, but other than that there is really no way to achieve what you are trying to do.
In the future, I would recommend always setting up a CloudFront distribution for those kinds of use cases, as that will allow you to change the origin later on.
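For the per-object workaround mentioned above, a minimal sketch that uploads a small HTML stub at the old key which redirects the browser; the bucket, key and target URL are placeholders:

import boto3

s3 = boto3.client("s3")

redirect_html = """<!DOCTYPE html>
<html><head>
<meta http-equiv="refresh" content="0; url=https://new-location.example.com/">
<script>window.location.replace("https://new-location.example.com/");</script>
</head><body>Redirecting...</body></html>"""

s3.put_object(
    Bucket="bucket_name",
    Key="poc/index.html",
    Body=redirect_html.encode("utf-8"),
    ContentType="text/html",
)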

Google Cloud Storage custom error messages

I am using Google Cloud Storage as a CDN to store files for our website, which is hosted on Fastly.
For PDF files, we redirect to the URL of the PDF file in Google Cloud Storage.
Everything works fine except when the user manipulates the file location in the URL (which is used to build the Google Storage object URL). In that case, Google Storage displays an error message in XML format as follows:
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
</Error>
Such a message is fine for dev environments; however, in production this is not something we can show to the user in a browser.
So I want to understand: is there any provision in Google Cloud Storage to customize these messages and pages?
Thanks in advance,
Yogesh
The best way I know of to avoid this error would be to use GCS's static website hosting feature. To do so, you'd purchase a domain name, create a GCS bucket that matches that domain name, then set the "NotFoundPage" property of the website configuration to an object containing whatever you'd like the error page to be. The main downside here is that this would only work over HTTP, not HTTPS.
For more on how to set up static website config, see https://cloud.google.com/storage/docs/hosting-static-website
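A minimal sketch of that website configuration with the google-cloud-storage Python client, assuming the bucket is named after your domain and a 404.html object has already been uploaded to serve as the custom error page:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("www.example.com")

# Point the website configuration at the main page and the custom 404 page.
bucket.configure_website(main_page_suffix="index.html", not_found_page="404.html")
bucket.patch()  # persist the configuration change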

Uploading data to Amazon S3 directly from a URL [duplicate]

Is it possible to upload a file to S3 from a remote server?
The remote server is basically a URL-based file server. For example, http://example.com/1.jpg serves the image. It doesn't do anything else, and I can't run code on this server.
Is it possible to have another server tell S3 to upload a file from http://example.com/1.jpg?
upload from http://example.com/1.jpg
server -------------------------------------------> S3 <-----> example.com
If you can't run code on the server or execute requests then, no, you can't do this. You will have to download the file to a server or computer that you own and upload from there.
You can see the operations you can perform on Amazon S3 at http://docs.amazonwebservices.com/AmazonS3/latest/API/APIRest.html
Checking the operations for both the REST and SOAP APIs, you'll see there's no way to give Amazon S3 a remote URL and have it grab the object for you. All of the PUT requests require the object's data to be provided as part of the request, meaning the server or computer that initiates the web request needs to have the data.
I have had a similar problem in the past where I wanted to download my users' Facebook thumbnails and upload them to S3 for use on my site. The way I did it was to download the image from Facebook into memory on my server, then upload it to Amazon S3 - the full thing took under 2 seconds. After the upload to S3 was complete, I wrote the bucket/key to a database.
Unfortunately there's no other way to do it.
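A minimal sketch of the download-then-upload approach described above; the source URL, bucket and key are placeholders:

import io
import boto3
import requests

# Fetch the remote file into memory...
resp = requests.get("http://example.com/1.jpg", timeout=30)
resp.raise_for_status()

# ...then stream it up to S3.
s3 = boto3.client("s3")
s3.upload_fileobj(io.BytesIO(resp.content), "my-bucket", "images/1.jpg")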
I think the suggestion provided is quite good: you can SCP the file to the S3 bucket. Using the .pem file gives password-less authentication, and via a PHP file you can validate the extensions; the PHP file can pass the file as an argument to the SCP command.
The only problem with this solution is that you must have your instance in AWS. You can't use this solution if your website is hosted with other hosting providers and you are trying to upload files straight to an S3 bucket.
Technically it's possible using AWS Signature Version 4. Assuming your remote server is the customer in the image below, you could prepare a form on the main server and send the form fields to the remote server for it to curl. Detailed example here.
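A sketch of what preparing that form on the main server could look like using boto3's presigned POST (Signature Version 4, depending on the client configuration); the bucket, key and expiry are placeholders. The remote server would then POST the file together with the returned fields, for example via curl.

import boto3

s3 = boto3.client("s3")
post = s3.generate_presigned_post(
    Bucket="my-bucket",
    Key="uploads/1.jpg",
    ExpiresIn=3600,  # the form is valid for one hour
)
# post["url"] is the endpoint to POST to; post["fields"] are the form fields
# (key, policy, signature, ...) that must accompany the file.
print(post["url"], post["fields"])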
You can use the scp command from the terminal.
1) Using the terminal, go to the directory containing the file you want to transfer to the server.
2) Type this:
scp -i yourAmazonKeypairPath.pem fileNameThatYouWantToTransfer.php ec2-user@ec2-00-000-000-15.us-west-2.compute.amazonaws.com:
N.B. Add "ec2-user@" before the ec2-... hostname that you got from the EC2 website! This is such an easy mistake to make!
3) Your file will be uploaded and the progress will be shown. When it reaches 100%, you are done!