AWS CodePipeline - Deploying a React app to an S3 bucket

I have a seemingly simple task: deploying a React app to an S3 bucket to use it as a website. I have followed several tutorials and attempted this numerous times and in numerous ways. All of them have failed, I am finding this incredibly frustrating, and the docs are no help.
Following this example, I set up a pipeline. I don't need a build stage, just a deploy that triggers on a GitHub push and deploys the built code to the S3 bucket.
I believe that my first problem came at the object key - something which simply isn't explained in a single AWS doc or example that I could find.
Do I need to enter the files in the build folder, or the files in the public folder? If I enter an object key of "build/index.html", it pulls those files into the S3 bucket nested in exactly the same way (under a build/ prefix), which is obviously not ideal.
I believe that these object keys are the files that CodePipeline is going to pull over - although an explanation of that somewhere would be nice. So, how do I enter the object keys for a React app?
I also tried uploading the files manually to the S3 bucket and using that as a website, but the browser simply downloaded the files instead of serving them as a website.

Check "Extract file before deploy" then you can skip Deployment path as empty if you want to deploy your files in root directory.
When check "Extract file before deploy" S3 bucket Key field will be disappeared.

Related

AWS CloudFormation nested stack templates - how to handle local development and versioning?

I just started a simple AWS serverless project to test it out, so I'm developing locally and hosting the project on GitLab.
I want to try nested stacks just to split the current template file into smaller pieces, but the TemplateURL property must be a URL to a template file located in an S3 bucket, so I can't simply move my stack resources to another local yaml file and include it in the parent one.
Manually uploading the nested stack template files to an S3 bucket and then running sam sync from my console looks too intricate IMHO, and setting up a pipeline that takes care of the whole process looks like too much work for a simple personal learning project.
The fastest solution seems to be to replace a deployment pipeline with a script that can be run locally, something like the sketch below.
I know AWS cloud services are meant for enterprise-grade projects, but I'm wondering if there is a simpler and built-in/official way to handle all of this.
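Just to show what I mean by a local script, something like this would be enough (a sketch only - my-sam-artifacts and main.yaml are placeholder names; as far as I understand, aws cloudformation package uploads the nested templates and rewrites their local TemplateURL paths to S3 URLs):

aws cloudformation package --template-file main.yaml --s3-bucket my-sam-artifacts --output-template-file packaged.yaml
aws cloudformation deploy --template-file packaged.yaml --stack-name my-learning-stack --capabilities CAPABILITY_IAM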

Strapi - how to switch and migrate from Cloudinary to S3 in production

Given the quite steep cost of Cloudinary as a multimedia hosting service (images and videos), our client decided that they want to switch to AWS S3 for file hosting.
The problem is that there are a lot of files (thousands of images and videos) already in the app, so merely switching the provider is not enough - we need to also migrate all the files and make it look like nothing really changed for the end user.
This topic is somewhat covered on the Strapi forum: https://forum.strapi.io/t/switch-from-cloudinary-to-s3/15285, but there is no solution posted besides a vaguely described procedure.
Is there a way to reliably perform the migration, without losing any data and without the need to change anything on the client side (apps that communicate with Strapi via the REST/GraphQL API)?
There are three steps to perform the migration:
switch provider from Cloudinary to S3 in Strapi
migrate files from Cloudinary to S3
perform database update to reroute Strapi from Cloudinary to S3
Switching provider
This is the only step that is actually well documented, so I will be brief here.
First, you need to uninstall the Cloudinary Strapi plugin by running yarn remove @strapi/provider-upload-cloudinary and install the S3 provider by running yarn add @strapi/provider-upload-aws-s3.
After you do that, you need to create your AWS infrastructure (an S3 bucket and an IAM user with sufficient permissions). Please follow the official Strapi S3 provider documentation https://market.strapi.io/providers/#strapi-provider-upload-aws-s3 and this guide https://dev.to/kevinadhiguna/how-to-setup-amazon-s3-upload-provider-in-your-strapi-app-1opc for the steps to follow.
Check that you've done everything correctly by logging in to your Strapi Admin Panel and opening the Media Library. If everything went well, all existing images should be missing (you will see the metadata like sizes and extensions, but not the actual images). Try to upload a new image by clicking the 'Add new assets' button. This image should upload successfully and also appear in your S3 bucket.
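If you prefer the terminal to the S3 console for that last check, something like this should show the test upload (a sketch - my-strapi-uploads is a placeholder bucket name and your AWS CLI credentials must be able to read it):

aws s3 ls s3://my-strapi-uploads/ --recursive --human-readable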
After everything works as described above, proceed to the actual data migration.
Files migration
The simplest (and most error-resistant) way to migrate files from Cloudinary to S3 is to download them locally, then use the AWS Console to upload them. If you have only hundreds (or low thousands) of files to migrate, you might actually use the Cloudinary Web UI to download them all (there is a limit of 1000 files per download from the Cloudinary Web App).
If this is not suitable for you, there is a CLI available that can easily download all files using your terminal:
pip3 install cloudinary-cli (installs the CLI)
cld config -url {CLOUDINARY_API_ENV} (the API environment variable can be found on the first page you see when you log into Cloudinary)
cld -C {CLOUD_NAME} sync --pull . / (This step begins the download. Depending on how many files you have, it might take a while. Run this command from the directory you want to download the files into. {CLOUD_NAME} can be found just above {CLOUDINARY_API_ENV} on the Cloudinary dashboard; you should also see it after running the second command in your terminal. For me, this command failed several times in the middle of the download, but you can just run it again and it will continue without any problem.)
After you download the files to your computer, simply use the S3 drag-and-drop feature in the AWS Console to upload them into your S3 bucket.
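If drag and drop gets unwieldy for thousands of files, the AWS CLI can do the same upload from the terminal (a sketch - ./cloudinary-export and my-strapi-uploads are placeholders for your local download folder and your bucket):

aws s3 sync ./cloudinary-export s3://my-strapi-uploads/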
Update database
Strapi saves links to all files in the database. This means that even though you switched your provider to S3 and copied all the files, Strapi still doesn't know where to find them, as the links in the database still point to the Cloudinary server.
You need to update three columns in the Strapi database (this approach is tested on a Postgres database; there might be minor changes when using other databases). Look at the 'files' table; there should be url, formats and provider columns.
The provider column is trivial: just replace cloudinary with aws-s3.
Url and formats are harder, as you need to replace only part of the string - to be more precise, Cloudinary stores URLs in {CLOUDINARY_LINK}/{VERSION}/{FILE} format, while S3 uses {S3_BUCKET_LINK}/{FILE} format.
My friend and colleague came up with the following SQL query to perform the update:
UPDATE files SET
formats = REGEXP_REPLACE(formats::TEXT, '\"https:\/\/res\.cloudinary\.com\/{CLOUDINARY_PROJECT}\/((image)|(video))\/upload\/v\d{10}\/([\w\.]+)\"', '"https://{BUCKET_NAME}.s3.{REGION}/\4"', 'g')::JSONB,
url = REGEXP_REPLACE(url, 'https:\/\/res\.cloudinary\.com\/{CLOUDINARY_PROJECT}\/((image)|(video))\/upload\/v\d{10}\/([\w\.]+)', 'https://{BUCKET_NAME}.s3.{REGION}/\4', 'g')
Just don't forget to replace {CLOUDINARY_PROJECT}, {BUCKET_NAME} and {REGION} with the correct strings (the easiest way to see those values is to access the database, go to the files table, and compare one of the old URLs with the URL of the file you uploaded at the end of the Switching provider step).
Also, before running the query, don't forget to back up your database! Even better, make a copy of the production database and run the query on that copy before you touch production.
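For a Postgres database the backup can be as simple as a dump up front (a sketch - it assumes the database and user are both called strapi and the database runs locally):

pg_dump -h localhost -U strapi -d strapi -F c -f strapi-before-s3-migration.dump

You can then restore that dump into a scratch database with pg_restore and rehearse the query there first.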
And that's all! Strapi is now uploading files to S3 bucket and you also have access to all the data you previously had on Cloudinary.

How do I put AWS Amplify project into CodeCommit?

I am just starting to use AWS Amplify but can't figure out how you are supposed to commit the project to a source code repository so that others can work on the same project.
I created a React serverless project 'web_app' and have created a few APIs and a simple front-end application, and now want to commit this to CodeCommit so it can be accessed by others.
Things get a bit confusing now, because for the CI/CD it seems one should create a repository for the front-end application - usually the source files are in the 'web_app/src' folder.
But Amplify seems to have already created a git repository at the 'web_app' folder level, so am I supposed to create a CodeCommit repository and push the 'web_app' local repo to the remote repository, and then separately create another repository for the front end in order to be able to use the CI/CD functions in AWS?
For some reason, if I do try to push anything to AWS CodeCommit I always get a 403 error.
OK - I'll answer this myself.
You just commit the entire project to a repo in CodeCommit. The project folder contains both the backend and the frontend code. The frontend code is usually in the /src folder and the backend code (CloudFormation files) is usually in the amplify folder.
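A rough sketch of that first push, assuming a CodeCommit repository named web_app already exists in us-east-1 and git is configured with the AWS CodeCommit credential helper:

cd web_app
git add .
git commit -m "Amplify project: frontend (src) plus backend (amplify)"
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/web_app
git push -u origin main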
Once you have the CodeCommit repo setup you can use the Amplify Console or the amplify-cli to create a new backend or frontend environment. Amplify is smart enough to know where to find the backend and frontend code.
Bear in mind that the backend amplify-cli code creates a bunch of files that are placed in the frontend folder (/src), including the GraphQL mutations and queries that will be used in the frontend code.
If you have set up CI/CD, then any 'git push' will result in a new build for the environment you are in. You can modify the build script to include or exclude rebuilding the backend - I think by default it will rebuild the backend if there are changes.
You can also manually rebuild the backend by using the amplify-cli 'amplify push' command.
Take care, because things can get out of sync and it seems old files can be left lying around that cause problems. Fortunately it doesn't take long to delete and rebuild an entire environment. Of course you may have to back up and reload your data first. Having some scripts to automatically load any seed data for development or testing is useful.
There is a lot of documentation out there but a lot of it seems to be quite confusing.

Google Cloud Storage - files not showing

I have over 30 Leaflet maps hosted on my Google Cloud Platform bucket (for example) and it has always been an easy process to upload my folder (which includes an html file with sub-folders including .js and .css files) and share the map publicly.
I tried uploading another map today, but within the folder there are no files showing and I get the following message "There are no live objects in this folder. If you have object versioning enabled, this folder may contain archived versions of objects, which aren't visible in the console. You can list archived object versions using gsutil or the APIs."
Does anyone know what is going on here?
We have also seen this problem, and it seems that the issue is limited to buckets that have spaces in the name.
It's also not reproducible through the gcloud web console, but if you use gsutil to upload a file to a bucket with a space in the name then it won't be visible on the web UI.
I can see from your screenshot that your bucket also has spaces (%20 in the url).
If you need a workaround ASAP, you could rename your bucket...
But Google should fix this soon, I hope.
There is currently an open issue on GCS/Console integration.
If file names contain any symbols that need URL encoding, they are not visible in the console, but they are accessible via gsutil/the API (which is currently the recommended workaround).
The issue has been resolved as of 8-May-2018 10:00 UTC.
This can happen if the file doesn't have an extension: the UI treats it as a folder and lets you navigate into it, showing a blank folder instead of the file contents.
We had the same symptom (files show up in API but invisible on the web and via CLI).
The issue turned out to be that we were saving files to "./uploads", which Google interprets as "create a directory literally called '.' and then a subdirectory called uploads."
The fix was to upload to "uploads/" instead of "./uploads". We also just ran a mass copy operation via the API for everything under "./uploads". All visible now!
I also had spaces in my URL and it was not working properly yesterday. I checked this morning and everything is working as expected. I still have the spaces in my URL, btw.

Trying to Set Up a Cydia Repo With Amazon S3

I have recently been trying to set up a Cydia repository using the Amazon AWS S3 service. I have uploaded the required files to get this to work, and yet it still does not. My file structure is:
Release
Packages.gz
Packages
mydeb.deb
These files are all in the same folder and the Packages.gz is linked correctly.
When I try to add the repository to Cydia, I get an HTTP 404 error saying it could not find Packages.gz. Any comments?
I had the same issue, and found a workaround.
Sync all files to s3://[your_bucket]/./
Amazon S3 is a simple KVS (key-value store). Maybe Cydia requests http://[repo-url]/./Packages.gz, but S3 can't resolve /./ on its own, so the objects need to live under a literal ./ prefix.
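Something along these lines is what I mean by syncing to the ./ prefix (a sketch - my-cydia-repo is a placeholder bucket name, the repo files are assumed to be in the current directory, and it only helps if your sync tool keeps the literal ./ prefix in the key names rather than normalizing it):

aws s3 sync . s3://my-cydia-repo/./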