We're using AWS Amplify to serve an Angular frontend via CI/CD (connected GitHub repo).
Amplify does not seem to compress responses by default, resulting in larger content delivery than necessary.
I can't find any option in the app settings inside the Amplify dashboard nor a solution online.
Is it possible to use compression (gzip or brotli) with AWS Amplify?
We managed to serve gzipped responses, and if I remember correctly it was a setting hidden in the CloudFront configuration. Although I don't remember how I activated that option, probably through the AWS CLI.
UPDATE: Amplify seems to serve gzip by default now
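For anyone still on a setup where compression is off, the CloudFront-level setting the answer alludes to is most likely the Compress flag on the distribution's cache behavior. Below is a minimal sketch using the AWS SDK for JavaScript (v3) instead of the AWS CLI; it assumes you can identify and edit the distribution that fronts your app (Amplify-managed distributions may not be directly editable), and the distribution ID is a placeholder.

```ts
// Sketch: enable gzip/brotli compression on the CloudFront distribution
// serving the app. Assumes @aws-sdk/client-cloudfront is installed and
// AWS credentials are configured; the distribution ID below is hypothetical.
import {
  CloudFrontClient,
  GetDistributionConfigCommand,
  UpdateDistributionCommand,
} from "@aws-sdk/client-cloudfront";

const client = new CloudFrontClient({ region: "us-east-1" });
const Id = "E1234567890ABC"; // placeholder distribution ID

async function enableCompression() {
  // Fetch the current config together with its ETag (required for updates)
  const { DistributionConfig, ETag } = await client.send(
    new GetDistributionConfigCommand({ Id })
  );
  if (!DistributionConfig?.DefaultCacheBehavior) {
    throw new Error("Distribution config not found");
  }

  // Serve compressed responses when the viewer sends Accept-Encoding
  DistributionConfig.DefaultCacheBehavior.Compress = true;

  await client.send(
    new UpdateDistributionCommand({ Id, IfMatch: ETag, DistributionConfig })
  );
}

enableCompression().catch(console.error);
```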
Building my first full stack website. I have a question about the architecture.
What I have:
Golang backend
React frontend
Auth0 authentication
AWS Amplify
I am considering this architecture, but I think I am misunderstanding something. The frontend connects directly to an S3 bucket to put private images there, but I am not sure if I should do it directly or send the request to my server and have the server update S3. Searching for solutions, it seems that Amplify is great for serverless, but in my application should I replace Amplify with CloudFront?
You can upload directly from the frontend, but make sure you are sanitizing the files you upload. The downside is that you will have to include your AWS S3 access credentials in the frontend.
If you do it through your server, which I assume to be an AWS service, you can give that service access to S3 through IAM, and no credentials need to be stored. The downside here is an additional hop and extra latency when uploading big files.
It's a choice for you to make based on your requirements.
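Since the stack listed above already includes Amplify, here is a minimal sketch of the first option (direct upload from the React frontend) using Amplify's Storage category; it assumes the Storage category (an S3 bucket plus a Cognito identity pool) has been set up with `amplify add storage`, so no S3 access keys are hard-coded in the client. The key prefix and access level are illustrative only.

```ts
// Sketch: upload a private image straight from the browser via Amplify Storage.
// Assumes Amplify.configure(...) has already run with a Storage configuration.
import { Storage } from "aws-amplify";

async function uploadPrivateImage(file: File): Promise<string> {
  // 'private' scopes the object to the currently signed-in user
  const result = await Storage.put(`images/${file.name}`, file, {
    level: "private",
    contentType: file.type,
  });
  return result.key; // keep this key to reference the object later
}
```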
My use case is fairly simple: I want to deploy a frontend to production that uses an Amplify backend, without exposing sensitive config like the API key.
I have a frontend that uses GitHub Actions for CI/CD and deploys to Zeit Now (since it's a Next.js project and needs SSR support, which Amplify currently does not provide). At the moment it does not have a backend connected, so it deploys to production without any issues.
In the same project I've set up AWS Amplify for the backend and connected it to the frontend. It all works successfully as expected in a local environment.
Now I want to deploy the frontend to production; however, the AWS config for connecting it to the backend is saved in an autogenerated file named aws-exports.js, which contains, amongst other things, the GraphQL endpoint and its API key. This file has been added to the .gitignore by the Amplify CLI.
If I remove the aws-exports.js file from the .gitignore and commit it to the repository, it would probably work once deployed to production, but I assume this is not a good idea since I would be exposing sensitive config data.
I don't want to use AWS to deploy my frontend, which is what's suggested as a solution in the documentation I've read about this. Is there any recommended way to do this while keeping the frontend and backend environments separated (meaning the frontend is still deployed to Zeit Now and uses the backend deployed in AWS)?
As far as I understand, the AWS AppSync security concept designates the API_KEY auth mode for use in either public applications or development environments.
Unauthenticated APIs require more strict throttling than authenticated APIs. One way to control throttling for unauthenticated GraphQL endpoints is through the use of API keys.
An API key is a hard-coded value in your application that is generated by the AWS AppSync service when you create an unauthenticated GraphQL endpoint.
I do not think that there is any benefit in trying to hide an API key. If authentication is required, it must be provided by means other than a hard-coded secret, which is always extractable from public apps (such as web frontends).
There are more auth models described in the docs. [1]
If you are planning to develop an app with private endpoints and a public frontend/client, you should definitely use another auth model - most likely OPENID_CONNECT or AMAZON_COGNITO_USER_POOLS.
I think you should first read the AWS blog post titled GraphQL API Security with AWS AppSync and Amplify [2] and afterwards state your question more precisely if anything remains unclear.
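For illustration, here is a minimal sketch of pointing an Amplify frontend at an AppSync API that uses AMAZON_COGNITO_USER_POOLS instead of API_KEY. All IDs, the endpoint, and the region are placeholders; in a CLI-managed project these values are normally generated for you in aws-exports.js.

```ts
// Sketch: Amplify configuration for an AppSync API secured by a Cognito user pool.
// Every value below is a placeholder, not real configuration.
import { Amplify } from "aws-amplify";

Amplify.configure({
  aws_appsync_graphqlEndpoint:
    "https://example1234.appsync-api.eu-central-1.amazonaws.com/graphql",
  aws_appsync_region: "eu-central-1",
  // GraphQL requests are authenticated with user pool tokens, not an API key
  aws_appsync_authenticationType: "AMAZON_COGNITO_USER_POOLS",
  Auth: {
    region: "eu-central-1",
    userPoolId: "eu-central-1_examplePool",
    userPoolWebClientId: "example-client-id",
  },
});
```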
References
[1] https://docs.aws.amazon.com/appsync/latest/devguide/security.html#api-key-authorization
[2] https://aws.amazon.com/de/blogs/mobile/graphql-security-appsync-amplify/
When attempting to initialise the awsmobile CLI, it says development is being discontinued; switch to the AWS Amplify CLI.
AWS Mobile was fantastic, in that it set up all the backend components I needed automatically. No need to use a templated project.
How on earth does AWS Amplify help you do this easily for React Native projects?? I need S3, Cognito, and DynamoDB (which is less than half the price of AppSync).
Yup - looks like it's being discontinued.
On their site they recommend using the Amplify CLI instead.
Here is the official documentation for the AWS Amplify GraphQL Client: https://aws-amplify.github.io/amplify-js/media/api_guide.html. However, that section only supplies examples for basic String inputs.
For the AWS Mobile AppSync SDK for JavaScript, there is a detailed doc here: https://docs.aws.amazon.com/appsync/latest/devguide/building-a-client-app-react.html. However, I do not want to add another configuration for it - I already have one for Amplify.
So, how can I upload files to S3 storage using AWS Amplify with AWS AppSync as the backend, and what extra configuration, if any, is needed for Amplify?
HTTP endpoints can be added as data sources for AppSync schemas, but as of now, an S3 bucket is not an option. There are solutions like [this](https://stackoverflow.com/a/50218870/4636715), but they require AWSAppSyncClient on the JavaScript side, which would add complexity to the client code since Amplify is already set up there, imho.
So, I ended up using the Storage category of AWS Amplify, independent of AppSync. I wait for the upload to succeed and then call an AppSync mutation to store the key of the uploaded file in DynamoDB using regular data source resolvers.
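A minimal sketch of that flow, assuming Amplify's Storage and API categories are already configured and that the schema exposes a hypothetical createImage mutation; the mutation, its input type, and the key prefix are illustrative only.

```ts
// Sketch: upload via Amplify Storage, then register the S3 key through AppSync.
import { Storage, API, graphqlOperation } from "aws-amplify";

// Hypothetical mutation - replace with the one generated from your schema
const createImage = /* GraphQL */ `
  mutation CreateImage($input: CreateImageInput!) {
    createImage(input: $input) {
      id
      key
    }
  }
`;

async function uploadAndRegister(file: File) {
  // 1. Upload the file to S3 via Amplify Storage (independent of AppSync)
  const { key } = await Storage.put(`uploads/${file.name}`, file, {
    contentType: file.type,
  });

  // 2. Once the upload has succeeded, store the S3 key in DynamoDB
  //    through a regular AppSync mutation and its data source resolver
  return API.graphql(graphqlOperation(createImage, { input: { key } }));
}
```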
Sorry for asking this kind of question, but I'm a bit lost here...
I have an app which consists of an Angular 4 frontend and a Java backend.
But I'm planning to use AWS Lambda, as I became interested after seeing the videos from Amazon.
The issue is that I don't know how to get the best out of AWS.
My Java app has a very time-consuming task that processes some images (it takes several seconds).
But I'm not sure if I can deploy my whole app in Lambda, or if the idea is to use an EC2 server and run just the specific image-processing task in a Lambda. Can anyone please shed some light here?
Also, can the frontend app be deployed in a Lambda, or again, is Lambda just for specific tasks?
EDIT:
The application flow would be:
The user in the Angular app uploads an image, the image goes to the Java backend server, and it's stored in (maybe) an AWS bucket. Then the Java app processes the image with ImageMagick, and the result is stored in (maybe) another bucket.
So the question is: when do I need to use Lambda? Just to convert the image, or should the full backend (and maybe frontend) app live there?
I'm asking because I cannot find enough information about that...
First of all, you can deploy your Angular frontend to Amazon S3. You can also use AWS CloudFront to add custom domains and free SSL certificates from Amazon using AWS Certificate Manager for your domain. For more details, refer to the article Deploying Angular/React Apps in AWS.
If you don't need to show the image processing results immediately in the frontend
For the image processing backend, you can use AWS API Gateway and Lambda along with S3. The recommended flow here is to use the API backend to get a signed URL, or to use AWS STS in Lambda (or Cognito Federated Identities) to get temporary access to the Amazon S3 bucket, so the image can be uploaded directly to S3 from the Angular app. For more details on this, refer to the article Upload files Securely to AWS S3 Directly from Browser.
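As a rough illustration of the signed-URL part (not taken from the linked article), an API Gateway Lambda could hand the Angular app a short-lived upload URL like this; the bucket name, key scheme, and query parameter are assumptions.

```ts
// Sketch: Lambda handler that returns a presigned PUT URL for a direct S3 upload.
// Assumes @aws-sdk/client-s3, @aws-sdk/s3-request-presigner and @types/aws-lambda.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import type { APIGatewayProxyEvent } from "aws-lambda";

const s3 = new S3Client({});
const BUCKET = "my-upload-bucket"; // hypothetical bucket name

export const handler = async (event: APIGatewayProxyEvent) => {
  const filename = event.queryStringParameters?.filename ?? "upload.jpg";

  // URL is valid for 5 minutes and only allows a PUT of this one object key
  const url = await getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: BUCKET, Key: `incoming/${filename}` }),
    { expiresIn: 300 }
  );

  return { statusCode: 200, body: JSON.stringify({ url }) };
};
```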
Note: AWS recently released a JavaScript library called AWS Amplify to simplify the implementation of the above tasks.
After uploading the image to S3, you can set up an event-driven workflow by using Amazon S3 triggers to invoke a Lambda function that performs the image processing and saves the processed image back to S3 (if you need to store the result).
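A minimal sketch of such an S3-triggered function, with the actual image processing left as a placeholder and the output bucket name assumed for illustration:

```ts
// Sketch: Lambda invoked by an S3 "object created" event; downloads the image,
// processes it (placeholder), and writes the result to another bucket.
import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import type { S3Event } from "aws-lambda";

const s3 = new S3Client({});
const RESULT_BUCKET = "my-processed-images"; // hypothetical output bucket

export const handler = async (event: S3Event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    // S3 event keys are URL-encoded; spaces arrive as '+'
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));

    // Download the uploaded image
    const original = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
    const bytes = await original.Body!.transformToByteArray();

    // Placeholder for the real processing step (resize, convert, etc.)
    const processed = bytes;

    // Store the result in the output bucket
    await s3.send(
      new PutObjectCommand({ Bucket: RESULT_BUCKET, Key: key, Body: processed })
    );
  }
};
```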
If you need to show the result immediately
Still use the previous approach up to the upload to S3 from the frontend, and then invoke an API Gateway Lambda function, passing the file's S3 path, to process the image.
To understand the details of connecting both frontend and backend with AWS serverless technologies, refer to the article Full Stack Serverless Web Apps with AWS.
As a side note, you should be able to implement the required functionality with AWS Lambda without using AWS EC2.