I'm working on https://github.com/Jigsaw-Code/outline-client, trying to publish our binaries to S3 instead of GitHub.
In src/electron/release_linux_action.sh I've changed the invocation of electron-builder to use
--config.publish.provider=s3 \
--config.publish.bucket=*my-play-bucket-name*
I have my credentials in ~/.aws/credentials
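For reference, the full invocation ends up looking roughly like this (a sketch rather than the exact script: the bucket name is a placeholder, and --publish always is just one way to force an upload on every build):
electron-builder --linux \
--publish always \
--config.publish.provider=s3 \
--config.publish.bucket=my-bucket-name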
Instead of our usual artifacts being published, my bucket now contains a bunch of tiny, weirdly named files.
Downloading and inspecting these files shows that they're logs of HTTP/1.1 sessions:
...
"PUT /my-bucket-name/2019-05-01-17-25-02-2391717FC94ECECB HTTP/1.1" 200...
...
This is obviously not what anyone wants, but I'm not sure where to go from here. I can aws s3 sync our build directory to the bucket just fine, but that would add a dependency on the AWS CLI and make us responsible for publishing correctly, as opposed to letting electron-builder do it for us. Any ideas?
I have a seemingly simple task: deploying a React app to an S3 bucket to use it as a website. I have followed several tutorials and attempted this numerous times in numerous ways. All of them have failed, I am finding this incredibly frustrating, and the docs are no help.
Following this example, I set up a pipeline. I don't need a build, just a deploy that triggers on a GitHub push and copies the built code to the S3 bucket.
I believe my first problem came at the object key, something that simply isn't explained in any AWS doc or example I could find.
Do I need to enter the files in the build folder, or the files in the public folder? If I enter an object key of "build/index.html", it pulls those files into the S3 bucket nested in exactly that same way, which is obviously not ideal.
I believe these object keys are the files that the pipeline is going to pull over, although an explanation of that somewhere would be nice. So, how do I enter the object keys for a React app?
I also tried uploading the files manually to the S3 bucket and using that as a website, but the browser simply downloaded the files instead of rendering the app as a website.
Check "Extract file before deploy" then you can skip Deployment path as empty if you want to deploy your files in root directory.
When check "Extract file before deploy" S3 bucket Key field will be disappeared.
Given the quite steep cost of Cloudinary as a multimedia hosting service (images and videos), our client decided they want to switch to AWS S3 for file hosting.
The problem is that there are a lot of files (thousands of images and videos) already in the app, so merely switching the provider is not enough - we also need to migrate all the files and make it look, to the end user, like nothing really changed.
This topic is somewhat covered on the Strapi forum: https://forum.strapi.io/t/switch-from-cloudinary-to-s3/15285, but no solution is posted there besides a vaguely described procedure.
Is there a way to reliably perform the migration, without losing any data and without needing to change anything on the client side (apps that communicate with Strapi via the REST/GraphQL API)?
There are three steps to perform the migration:
switch provider from Cloudinary to S3 in Strapi
migrate files from Cloudinary to S3
perform database update to reroute Strapi from Cloudinary to S3
Switching provider
This is the only step that is actually well documented, so I will be brief here.
First, you need to uninstall the Cloudinary provider by running yarn remove @strapi/provider-upload-cloudinary and install the S3 provider by running yarn add @strapi/provider-upload-aws-s3.
After you do that, you need to create your AWS infrastructure (an S3 bucket and an IAM user with sufficient permissions). Please follow the official Strapi S3 provider documentation https://market.strapi.io/providers/@strapi-provider-upload-aws-s3 and this guide https://dev.to/kevinadhiguna/how-to-setup-amazon-s3-upload-provider-in-your-strapi-app-1opc for the steps to follow.
Check that you've done everything correctly by logging in to your Strapi Admin Panel and opening the Media Library. If everything went well, all existing images should be missing (you will see all the metadata like sizes and extensions, but not the actual images). Try to upload a new image by clicking the 'Add new assets' button. This image should upload successfully and also appear in your S3 bucket.
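If you prefer to verify from the terminal instead of the S3 console, something like the following should show the freshly uploaded object (the bucket name is a placeholder; this assumes the AWS CLI is configured with read access to your bucket):
aws s3 ls s3://my-strapi-uploads-bucket --recursive | tail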
After everything works as described above, proceed to actual data migration.
Files migration
The simplest (and most error-resistant) way to migrate files from Cloudinary to S3 is to download them locally, then use the AWS Console to upload them. If you have only hundreds (or low thousands) of files to migrate, you might actually use the Cloudinary Web UI to download them all (there is a limit of 1000 files per download from the Cloudinary Web App).
If this is not suitable for you, there is a CLI available that can easily download all files using your terminal:
pip3 install cloudinary-cli (installs the CLI)
cld config -url {CLOUDINARY_API_ENV} (the API environment variable can be found on the first page you see when you log in to Cloudinary)
cld -C {CLOUD_NAME} sync --pull . / (This step begins the download. Depending on how many files you have, it might take a while. Run this command from the directory you want to download the files into. {CLOUD_NAME} can be found just above {CLOUDINARY_API_ENV} on the Cloudinary dashboard; you should also see it after running the second command in your terminal. For me, this command failed several times in the middle of the download, but you can just run it again and it will continue without any problem.)
After you download the files to your computer, simply use the S3 drag-and-drop feature to upload them into your S3 bucket.
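If you would rather script the upload than drag and drop, the AWS CLI can do the same thing (a sketch with a placeholder bucket name; it assumes your credentials can write to the bucket). Run it from the directory you downloaded the files into:
aws s3 sync . s3://my-strapi-uploads-bucket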
Update database
Strapi saves links to all files in its database. This means that even though you switched your provider to S3 and copied all the files, Strapi still doesn't know where to find them, as the links in the database still point to the Cloudinary server.
You need to update three columns in the Strapi database (this approach is tested on a Postgres database; there might be minor changes when using other databases). Look into the 'files' table; there should be url, formats and provider columns.
The provider column is trivial: just replace cloudinary with aws-s3.
Url and formats are harder, as you need to replace only part of the string. To be more precise, Cloudinary stores urls in the {CLOUDINARY_LINK}/{VERSION}/{FILE} format, while S3 uses the {S3_BUCKET_LINK}/{FILE} format.
My friend and colleague came up with the following SQL query to perform the update:
UPDATE files SET
formats = REGEXP_REPLACE(formats::TEXT, '\"https:\/\/res\.cloudinary\.com\/{CLOUDINARY_PROJECT}\/((image)|(video))\/upload\/v\d{10}\/([\w\.]+)\"', '"https://{BUCKET_NAME}.s3.{REGION}/\4"', 'g')::JSONB,
url = REGEXP_REPLACE(url, 'https:\/\/res\.cloudinary\.com\/{CLOUDINARY_PROJECT}\/((image)|(video))\/upload\/v\d{10}\/([\w\.]+)', 'https://{BUCKET_NAME}.s3.{REGION}/\4', 'g')
Just don't forget to replace {CLOUDINARY_PROJECT}, {BUCKET_NAME} and {REGION} with the correct strings (the easiest way to see those values is to access the database, go to the files table, and compare one of the old urls with the url of the file you uploaded at the end of the Switching provider step).
Also, before running the query, don't forget to back up your database! Even better, make a copy of the production database and run the query on that copy before you touch production.
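A quick way to take that backup with standard Postgres tooling could look like this (hypothetical host, user, database and file names):
pg_dump -h localhost -U strapi -d strapi > strapi_backup_before_s3_migration.sql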
And that's all! Strapi now uploads files to the S3 bucket, and you still have access to all the data you previously had on Cloudinary.
In the past few days, I created my first GCP project and hosted my Angular app in a Storage bucket (the app.yaml and dist folder). This worked great: I got everything running and pointed a domain at it. I rolled out one update to the dist folder a day or two ago.
Today I made some updates and generated a new dist, hoping to update the app, but now I am finding I cannot update any content in the bucket at all. If I click "upload folder / file" or drag in content to upload, it shows a loading snackbar for a split second, then immediately says "0 files uploaded successfully". No error is given, but nothing is uploaded.
If I click "create folder" it gives me the error message
"Unable to create your folder. Try again or contact your
administrator."
But I am the owner / creator of the project, the bucket, etc.
Things I have tried:
Double-checked permissions and given myself specific permissions on the bucket
Confirmed I can create content in other buckets (I can)
Uploaded content into sub-folders of the dist (same errors as before)
I'm trying to find a way to get more definitive error logs from this, but the feedback I'm getting from the UI is very vague at the moment. I'm new to GCP, so I'm not sure where to look to get something more informative.
The GCP UI swallows all errors; things fail without presenting any reason why. Use gsutil commands to actually surface the errors. In my case I ran this command:
gsutil cp testfilename gs://my-bucket-name
That threw the following error:
AccessDeniedException: 403 The project to be billed is associated with a delinquent billing account.
The project was on an auto-generated, generic billing account named "My Billing Account". A day before the problem appeared, I had created a new billing account in order to run gcloud app deploy on the API project (a separate gcloud project from the one hosting the broken bucket). Creating the new billing account made the generic one defunct, but nothing in the UI alerted me to this. Only the gsutil command exposed the error: no debug tool errors, no UI errors. The generic "My Billing Account" they created doesn't even appear in the list of billing accounts to select from.
So lesson learned: don't trust the GCP UI. Use command line tools to expose errors.
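If you suspect something similar, one quick check from the command line is to look at which billing account the project is tied to and whether billing is enabled (a sketch; substitute your own project ID):
gcloud beta billing projects describe my-project-id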
I am using AWS CloudFront with an S3 origin. I'm using a webpack plugin to cache-bust by giving all of my static files chunk-hashed names, excluding index.html, which I will simply invalidate with a CloudFront invalidation on each new release.
I plan on using a Jenkins build to run aws s3 sync ./dist s3://BUCKET-NAME/dist --delete, which will swap in the new chunked files as necessary. Then I will overwrite the index.html file to use the new chunked references. During the few seconds (at most) it takes to swap out the old files for the new ones, it is possible that a user will make a request to the website from a region in which CloudFront has not cached the resources, at which point they'll be unavailable because I have just deleted them.
I could not find any information about avoiding this edge case.
Yes, it can happen that a user served from a different edge location experiences the missing files. To solve this, you need to change your deployment approach, since cache busting and timing are unpredictable at the request-response level. One commonly used pattern is to keep a different directory (path) in S3 for each deployment, as follows.
For release v1.0
/dist/v1.0/js/*
/dist/v1.0/css/*
/dist/index.html <- index.html for v1.0 release which has reference for js & css in /dist/v1.0 path
For release v1.1
/dist/v1.1/js/*
/dist/v1.1/css/*
/dist/index.html <- index.html for v1.1 release which has reference for js & css in /dist/v1.1 path
After each deployment, a user will receive either the old version (v1.0) or the new version (v1.1) of index.html, both of which will keep working during the transition period until the edge cache is invalidated.
You can automate the versioning with Jenkins, either by incrementing the version automatically or by using the parameterized build plugin.
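As a rough sketch (not a drop-in script: the bucket name, local paths and VERSION value are placeholders), the Jenkins shell step could do something like this:
VERSION=v1.1
# upload the hashed assets under a version-specific prefix
aws s3 sync ./dist "s3://BUCKET-NAME/dist/${VERSION}/" --exclude "index.html"
# then publish the index.html that references /dist/${VERSION}/...
aws s3 cp ./dist/index.html "s3://BUCKET-NAME/dist/index.html" --cache-control "no-cache"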
This also gives you immutable deployments: in the case of a critical issue, you can roll back to a previous deployment. Apart from that, you can configure S3 lifecycle management rules to archive the older versions.
A library called Stout can do this all for you automatically. It's a major time saver. I have no association with them, I just really like it.
A few benefits:
Can help you create new buckets if you want it to
Versions your script and style files during each deploy to ensure your pages don't use an inconsistent set of files during or after a deploy
Supports rollback to any previous version
Properly handles caching headers such as TTL
Compresses files for faster delivery
Usage:
stout deploy --bucket my-bucket-name --root path/to/website
Here is how I solved that problem.
Deleting the old files right away will not solve the issue.
Since you have chunk-hashed file names, I assume index.html is the only file whose name is not hashed.
Collect all the old files that will need to be deleted:
aws s3 ls s3://bucket
Deploy all the files from your new build:
aws s3 cp ./dist s3://bucket --recursive
Now remove the old files, either with aws s3 mv or aws s3 rm:
aws s3 rm the files you collected before, except index.html
Your site will now be served with the new app.
Hope it helps.
I have recently been trying to set up a Cydia repository using the Amazon S3 service. I have uploaded the required files to get this to work, and yet it still does not. My file structure is:
Release
Packages.gz
Packages
mydeb.deb
These files are all in the same folder and the Packages.gz is linked correctly.
When I try to add the repository to Cydia, I get an HTTP 404 error saying it could not find Packages.gz. Any comments?
I had the same issue, and found a workaround.
Sync all files to s3://[your_bucket]/./
Amazon S3 is a simple KVS (key-value store). Cydia presumably requests http://[repo-url]/./Packages.gz, but S3 doesn't resolve the /./ path segment, so the request only succeeds if the object key literally contains it.
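One way to create such keys verbatim is the low-level s3api call, which uses the key exactly as you pass it (a sketch with a placeholder bucket name; repeat for each repo file):
aws s3api put-object --bucket your-bucket --key "./Packages.gz" --body Packages.gz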