Netlify implementation / hosting of Svelte endpoints (AWS?)

I have set up a REST endpoint (a simple HTTP GET) using SvelteKit's endpoints implementation. It works in 'npm run dev' mode running on my computer, but certain requests give errors when hosted on Netlify. The specific error message is straightforward:
{"errorMessage":"Response payload size exceeded maximum allowed payload size (6291556 bytes).","errorType":"Function.ResponseSizeTooLarge"}
I know I can fix this by splitting up and paging results.
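For illustration, a paged version of such an endpoint could look roughly like this (a sketch only; loadItems is a hypothetical data-access helper, and the endpoint file convention varies across SvelteKit versions):

import { json } from '@sveltejs/kit';

// Hypothetical data source; replace with your own query.
declare function loadItems(offset: number, limit: number): Promise<unknown[]>;

const PAGE_SIZE = 100; // size pages so each response stays well below the ~6 MB Lambda cap

export async function GET({ url }: { url: URL }) {
  const page = Number(url.searchParams.get('page') ?? '0');
  const items = await loadItems(page * PAGE_SIZE, PAGE_SIZE);
  // Report whether another page exists so the client can keep fetching.
  return json({ page, items, nextPage: items.length === PAGE_SIZE ? page + 1 : null });
}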
My question is: Does Netlify use AWS Lambda to host the compiled Svelte endpoints even if I don't use Netlify's serverless functions explicitly?
I ask because searching for this error message turns up results about Netlify/AWS Lambda, and I'm curious whether anyone knows concretely how Netlify handles these Svelte endpoints behind the scenes. It really looks like they bundle them into AWS Lambda functions (which they already advertise using for their own Netlify Functions).

SvelteKit uses @sveltejs/adapter-netlify under the hood (https://www.npmjs.com/package/@sveltejs/adapter-netlify) to work on Netlify. This package generates a Netlify Function named render which handles the server-side rendering of your Svelte application.
So to answer your question: yes, you are using Netlify Functions in your project. Not explicitly, but your framework is doing it for you.
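For reference, the adapter is wired up in svelte.config.js roughly like this (a sketch; options vary across adapter versions):

import adapter from '@sveltejs/adapter-netlify';

export default {
  kit: {
    // Bundles the app's server-side code into a Netlify Function,
    // which Netlify in turn runs on AWS Lambda.
    adapter: adapter()
  }
};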

Related

AWS API Gateway Integration Request setup automation

I am trying to use AWS API Gateway plus Swagger to route requests to my Express backend. I can't figure out how to automate the setup of the integration request, as the Swagger file has no details on it.
I'm also having difficulty with the endpoint URL parameter when setting my method requests to GET with VPC Link as the integration type.
For example:
My api gateway path is /info/car/{model}/aggregate
Now the endpoint url is http://carapi.com/info/car/{model}/aggregate
I have lots of gateway paths, all of which are the same paths that my carapi.com site uses, so I don't want to keep retyping the path over and over. When entering the endpoint URL, I avoided retyping carapi.com by using a stage variable, turning my endpoint URL into
http://${stageVariables.carApi}/info/car/{model}/aggregate
However after reading https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html#stagevariables-template-reference
I see that there is also a $context variable available, but when I try to use it (which, per that link, should let me reference the path), calling the API in Postman gives 'internal server error':
http://${stageVariables.carApi}/${context.resourcePath}
So my question is: how do I automate the setup of integration requests so I don't have to manually set up each and every one (as I have hundreds of paths)? Is there also any way to avoid setting the paths manually for the endpoints?
I don't know of any solution which is ready to use.
In my project, we wrote a custom solution that downloads the application's Swagger JSON, parses it, adds the required integration, and generates another JSON file, which we then import into API Gateway. This was turned into a Jenkins job that runs after each microservice deployment.
Another way is, instead of generating the JSON, to call the API Gateway APIs directly and add the integration.
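As a rough sketch of that first approach in Node/TypeScript (the stage-variable names and integration settings here are assumptions; adjust them to your setup):

import { readFileSync, writeFileSync } from 'fs';

// Load the Swagger document exported by the application.
const swagger = JSON.parse(readFileSync('swagger.json', 'utf8'));

// Attach an API Gateway integration to every operation, reusing the
// resource path and taking the backend host from a stage variable.
for (const [path, ops] of Object.entries(swagger.paths as Record<string, any>)) {
  for (const [method, op] of Object.entries(ops as Record<string, any>)) {
    op['x-amazon-apigateway-integration'] = {
      type: 'http_proxy',
      httpMethod: method.toUpperCase(),
      uri: 'http://${stageVariables.carApi}' + path,
      connectionType: 'VPC_LINK',
      connectionId: '${stageVariables.vpcLinkId}', // assumed stage variable
    };
  }
}

writeFileSync('swagger-apigw.json', JSON.stringify(swagger, null, 2));
// Import the result, e.g.:
//   aws apigateway put-rest-api --rest-api-id <id> --mode merge --body fileb://swagger-apigw.json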

HTTP 407 Proxy Authentication Required while accessing Amazon S3

I have tried everything, but I can't seem to fix this issue, which happens for only one client behind a corporate proxy/firewall. Our Silverlight application connects to Amazon S3 for downloading/uploading documents. On one client, and one client only, it returns a 407 error, and after that the application fails to save anything.
Inner Exception:
System.ServiceModel.ProtocolException: [UnexpectedHttpResponseCode]
Arguments: 407,Proxy Authentication Required
We had something similar at a different client, but that was more of a CORS issue. To resolve it, I used CloudFront to fake a sub-domain that then accesses the S3 bucket, and it solved the issue. I was hoping it would fix things for this client as well, but it didn't.
I have tried adding this code to web.config, as suggested by a lot of answers:
<system.net>
  <defaultProxy useDefaultCredentials="true">
  </defaultProxy>
</system.net>
I have read articles about passing proxy headers with basic authentication using a username and password, but I am not sure how that would help us. The proxy server is used by the client, and any authentication it requires is outside our domain.
**Additional Information**
The Silverlight code references two services. One is our WCF service, which retrieves all the data for the application. The other is the Amazon S3 service, which uses the Amazon SOAP API; its WSDL is at http://s3.amazonaws.com/doc/2006-03-01/AmazonS3.wsdl
If I go into our app and only use the parts of the system that don't make any calls to the Amazon S3 API, the application works fine. As soon as I go to a part of the system that calls S3, the problem starts. Funnily enough, the call to S3 goes fine and I can retrieve the document, but then any calls to our WCF service return 407.
Any ideas?
**Update 2**
Based on comments from Elliot Nelson, I checked the stack we were using for making HTTP requests in our application. It turns out we are using the client HTTP stack for both http and https requests by default. Here is the code we have in the App.xaml constructor:
public App()
{
    Startup += Application_Startup;
    UnhandledException += Application_UnhandledException;
    InitializeComponent();

    // Force the Silverlight client HTTP stack for all requests,
    // bypassing the browser's networking (and its proxy handling).
    WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
    WebRequest.RegisterPrefix("https://", WebRequestCreator.ClientHttp);
}
Now I need to understand the differences between ClientHttp and BrowserHttp and when to use each, as well as the potential impacts/issues of switching to BrowserHttp.
**Update 3**
Is there a way to request that browsers run your in-browser Silverlight application in trusted mode, and would that help bypass this issue?
(Answer #2)
So, most likely, in corporate environments like this one, almost nothing can be done without whatever custom proxy settings are set in IE, usually pushed by corporate policy. To take advantage of those proxy settings, you want to use WebRequestCreator.BrowserHttp, which automatically uses the browser's default settings when making requests.
There's a table of the differences between these two clients available in the Microsoft docs. I'm guessing you were using something (maybe setting custom headers or reading the raw response body) that wasn't supported in BrowserHttp.
For security reasons, you can't "ask" the browser what its proxy settings are and use them, so this is a tricky situation. You can specify Browser vs Client handling by domain, or even for a specific request (the same page above describes how); you may be able in this case to get away with just using ClientHttp for your service calls and BrowserHttp for your S3 calls, and avoid the problem altogether!
For next steps, I'd try that approach; if it doesn't work, I'd try switching wholesale to BrowserHttp just to see if it bypasses the proxy issue (there's almost no chance the application will actually work, since you're probably using ClientHttp-only options).
Long term, you may want to consider making changes to your services so they are usable by a BrowserHttp-only application (this would require you to be pretty basic in your requests/responses, but using only BrowserHttp would be a guarantee you'd work in pretty much any corp network).
Running in trusted mode is probably a group policy thing which would require their AD admins to approve / whitelist your app.
I think the underlying issue you are facing is that the proxy requires NTLM authentication and for whatever reason the browser declines to provide your app with that context.
One way to prove that it's an NTLM auth issue is to test with curl: get it to make a request through the proxy, and then it should be a bit easier to code against. E.g., the following curl command will get you through 99% of Windows corporate proxies (assuming the proxy is at proxy-host.corp:3128):
C:\> curl.exe -v --proxy proxy-host:3128 --proxy-user : --proxy-ntlm https://www.google.com
NOTE: The --proxy-user : tells curl to use the current user's session to perform the NTLM challenge.
So if you can get the client to run that, you can at least confirm that NTLM works; then it's just a matter of getting the app to perform the NTLM challenge using the default credentials (which may or may not be provided by the browser session).
Since you described this as a Silverlight application, I'm going to assume you can't use classic browser-proxy troubleshooting like "move the browser to a public network" or "try a different browser" to isolate the problem.
You should try to isolate the proxy server and have the customer use the required proxy auth.
The application is making the request, but it might be intercepted by a transparent proxy, or the response might not be coming from what you consider the web server.
In the early days, the 401 status was strictly associated with web auth, and 407 with proxy auth.
Architecturally, the separation is a convenience; a single host can combine web-server, proxy, and reverse-proxy behaviors.
What happens is that your customer's environment makes a web connection to the destination, but receives an HTTP 407 status from some host, probably on their network or sometimes at the provider. Almost certainly the request is being received, not forwarded. The HTTP client your application lives in needs to provide the credentials that host requires. Corporate environments are complex enough that your customer may well say this is the first time they have heard of it (some proxy auth is also dynamic or destination-specific).
Also, in some corporate environments, the operator can allow temporary or permanent white-listing from the proxy-auth service. You should see if they can do this, even temporarily, to confirm there won't be other problems.
In the end, it sounds like your application might not robustly support proxy-auth, or the proxy-auth type they use in their environment.

Lambda function custom domain

I have been messing around with AWS Lambda today, trying some things out. I am currently trying to trigger the function from a URL in a browser.
The URL looks similar to this: https://abcdef.execute-api.eu-west-2.amazonaws.com/default/test
As I understand it, I can assign a custom domain to my endpoint, but can I also get rid of the path part of the URL? So, for example:
GET: https://example.com/
GET: https://example.com/somefile.txt
POST: https://example.com/ ['some_post_field' => 'some data']
Will all of these be passed to my function, or do I need to configure an EC2 instance with NGINX to proxy-pass the requests to Lambda?
Any thoughts would be useful.
There are now a couple different ways you can accomplish this in AWS:
The newest (and arguably coolest!) is to use CloudFront to run your code with their Lambda@Edge service. You can completely customize your URL path and have portions used as variables like any other REST endpoint. You attach your Lambda function to "behaviour" endpoints, which give you full URL control. It's fairly deep and beyond the scope of your question to explain it all here, but read through the docs at the link provided and you'll likely see lots of stuff you like.
Another, older, more expensive but better documented method is to use AWS's API Gateway, as you alluded to in your question's tags. It has a great front-end console and makes it easy to connect API endpoints to your Lambda backend logic by attaching them to REST methods. The console helps you "variable-ize" your URL with form field data. This service helps the most with custom domains to trigger from; setting up custom domains is a snap in API Gateway. Be sure to use AWS Certificate Manager for free SSL certs on your custom domain too!
How you specifically set up your endpoints depends on which service you choose. Personally, given your desire to serve up different types of content, I would lean towards CloudFront and define a "behaviour" URL for your dynamic Lambda content. If the URL request does not match one of your defined behaviours, it falls back to the CloudFront cache/origin to serve your static assets (somefile.txt). Only matches go to your attached Lambda function with form data. Very slick!
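To give a flavour of it, a minimal Lambda@Edge handler attached to a behaviour might look like this (a Node/TypeScript sketch; the generated response is illustrative):

import type { CloudFrontRequestHandler } from 'aws-lambda';

// Runs only for requests matching the CloudFront behaviour it is attached to;
// everything else falls through to the cache/origin (e.g. somefile.txt).
export const handler: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request;

  // Generate a response at the edge instead of forwarding to an origin.
  return {
    status: '200',
    statusDescription: 'OK',
    headers: {
      'content-type': [{ key: 'Content-Type', value: 'application/json' }],
    },
    body: JSON.stringify({ path: request.uri, method: request.method }),
  };
};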
A lot of example Lambda@Edge functions are available here.
I have used both and have clients on both now. Lambda@Edge is ridiculously faster and less expensive, BUT it is less documented, has a steeper learning curve, and its console is not nearly as helpful. I would honestly try both to see which fits your situation and experience level best. Both will get the job done. EC2 is most definitely NOT needed (nor desired, perhaps). Hope that helps, and good luck!
Instead of exposing the Lambda function directly via its URL, expose it through AWS API Gateway, where you can define your own paths and map them to a domain.

Setting CORS via API Gateway for Serverless Architecture Model Proxy Endpoint

I am working on a C#/.NET serverless application using the AWS Visual Studio Toolkit, and I am having trouble figuring out what I am missing in my CORS configuration. I based my project on the ASP.NET example included with the toolkit, which configures API Gateway with a single API endpoint that works as a proxy into the ASP.NET Web API framework.
Testing this application in Chrome (serving a local Node project), I get: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8080' is therefore not allowed access.
I know this means I have to configure CORS on the API Gateway endpoint, but I seem to be missing something. I used the Actions dropdown to enable CORS.
But I get some errors and the problem persists.
I used a Chrome extension to disable CORS (temporarily) and have confirmed that the API endpoint works normally without CORS.
So what am I missing here? The examples of setting CORS online don't usually have instructions of a catch-all endpoint like this is set up to use, and even breaking GET into its own method didn't seem to help.
As an additional question, if there is some CORS configuration I am missing, is there a good way to get it integrated into the serverless.template file or some other automated deploy step?
It has to do with your ANY proxy method. As stated here: http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html
Note
When applying the above instructions to the ANY method in a proxy integration, any applicable CORS headers will not be set. Instead, you rely on the integration back end to return the applicable CORS headers, such as Access-Control-Allow-Origin
So you will have to make your backend API return the appropriate CORS headers.
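For illustration, this is the shape of response a proxy-integrated function has to produce, as a Node/TypeScript sketch (the question's backend is ASP.NET, but the headers are the same; the origin value is illustrative):

import type { APIGatewayProxyHandler } from 'aws-lambda';

export const handler: APIGatewayProxyHandler = async () => {
  return {
    statusCode: 200,
    headers: {
      // With an ANY proxy integration, API Gateway does not inject these,
      // so the function itself must return them.
      'Access-Control-Allow-Origin': 'http://localhost:8080',
      'Access-Control-Allow-Headers': 'Content-Type,Authorization',
      'Access-Control-Allow-Methods': 'GET,POST,OPTIONS',
    },
    body: JSON.stringify({ ok: true }),
  };
};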
You need to return the header from your server as well as configuring API Gateway.
See this sample, where the CORS header is applied to the static bucket website:
https://www.codeproject.com/Articles/1178781/Serverless-Architecture-using-Csharp-and-AWS-Amazo
For the APIs to work properly, two things must be done:
1. The OPTIONS method must be correctly set up - usually done using a mock method on the API Gateway.
2. The HTTP method implementations in your code must return the CORS headers correctly. There are quite a few articles about this if you search.
For me the problem was point 1; using the API Gateway 'Enable CORS' button did not work when I was developing the API Gateway-Lambda integration using .NET Core. I also didn't find a way to add the creation of the OPTIONS method in the serverless.template file.
Here's another way to do it: after publishing the Lambdas from the CLI or Visual Studio, fire a PUT request at the API and pass a Swagger definition which contains the OPTIONS method definitions, making sure you set the query param mode=merge. You can use Postman to do this.
or
Use a .NET utility which does the same thing, explained here:
http://sbytestream.pythonanywhere.com/blog/Enabling-APIGateway-CORS
The source code is available on GitHub too.
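Whichever route you take, the fragment being merged in for the OPTIONS method looks roughly like this (a sketch following API Gateway's OpenAPI extensions, written as a TypeScript object for readability; the header values are illustrative):

// Merge this under each path that needs a CORS preflight response.
const optionsMethod = {
  responses: {
    '200': {
      description: '200 response',
      headers: {
        'Access-Control-Allow-Origin': { type: 'string' },
        'Access-Control-Allow-Methods': { type: 'string' },
        'Access-Control-Allow-Headers': { type: 'string' },
      },
    },
  },
  'x-amazon-apigateway-integration': {
    type: 'mock', // no backend call; API Gateway answers the preflight itself
    requestTemplates: { 'application/json': '{"statusCode": 200}' },
    responses: {
      default: {
        statusCode: '200',
        responseParameters: {
          'method.response.header.Access-Control-Allow-Origin': "'*'",
          'method.response.header.Access-Control-Allow-Methods': "'GET,POST,OPTIONS'",
          'method.response.header.Access-Control-Allow-Headers': "'Content-Type'",
        },
      },
    },
  },
};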

Using Fine Uploader, how can I use an HTTPS endpoint?

I am relatively new to JavaScript, and I got an uploader tool called Fine Uploader that I was considering using. I got it to work locally on my development machine (VB.NET), but when I put it on my external server, I noticed that the POST is done over http and directly to the server's domain name (e.g. mydomain.com) instead of mydomain.com/testproject. The site only allows https traffic.
Is there an easy way to change this, so that it points at https://mydomain.com/testproject/FileUpload.aspx?
The code used by Fine Uploader shows a parameter endpoint: '/FileUpload.aspx'.
Do I have to change the IIS settings for this web service?
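For what it's worth, Fine Uploader's request.endpoint option also accepts an absolute URL, so a sketch of the fix (with an illustrative URL) would be:

declare const qq: any; // the Fine Uploader global from fine-uploader.js

const uploader = new qq.FineUploader({
  element: document.getElementById('uploader'),
  request: {
    // Absolute HTTPS endpoint instead of a root-relative path.
    endpoint: 'https://mydomain.com/testproject/FileUpload.aspx',
  },
});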
If you need to enable CORS (cross-domain requests), Fine Uploader supports this. You should read my blog post on how CORS support is implemented in Fine Uploader and how you can support such requests in your server-side code.