I have icon assets located on S3, and a page in Angular that looks like this:
<img alt="Some Logo" src="assets/icons/logo.svg" width="200">
It used to always work properly.
But recently, after I applied AWS WAF for network restriction, it worked for one day and then never worked again (the image never shows anymore).
If I take a look at the developer tools, the response shows up as binary/octet-stream instead of SVG (it was SVG back when I had not applied AWS WAF).
Any advice on how to fix this? I'm pretty sure it worked even with AWS WAF for at least a day, unless there's some caching issue happening that I don't know about.
On old versions of Python, the mimetypes library didn't have a definition for the SVG file type.
See this issue: https://bugs.python.org/issue19377
I added '.svg': 'image/svg+xml' to the types_map in mimetypes.py, and then the AWS CLI picked up the correct type for SVGs.
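If editing mimetypes.py isn't an option, here is a rough sketch along the same lines (bucket and key names are placeholders): register the mapping at runtime, or fix the Content-Type of an object that was already uploaded as binary/octet-stream.

import mimetypes
import boto3

# Register the mapping at runtime instead of editing mimetypes.py.
mimetypes.add_type("image/svg+xml", ".svg")

# Or repair an already-uploaded object by copying it onto itself with new metadata.
s3 = boto3.client("s3")
s3.copy_object(
    Bucket="my-assets-bucket",
    Key="assets/icons/logo.svg",
    CopySource={"Bucket": "my-assets-bucket", "Key": "assets/icons/logo.svg"},
    ContentType="image/svg+xml",
    MetadataDirective="REPLACE",  # required, otherwise the old metadata is kept
)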
I'm building out a series of content websites, and I've built a working CodePipeline that allows me to push edits to HTML files on GitHub that are instantly reflected in the S3 bucket, and consequently on the live website.
I created a CloudFront distribution to get HTTPS for my website. The certificate and distribution work fine, and it serves the index.html from my S3 bucket, but the changes my GitHub pipeline makes to the S3 bucket are reflected in the bucket and not in the CloudFront distribution.
From what I've read, the edge locations used by CloudFront don't update their caches very often, and when they do, they might not update the edited index.html file because it has the same name as the old version.
I don't want to manually rename my index.html file in S3 every time one of my writers needs to post a top 10 Tractor Brands article or implement an experimental, low-effort clickbait idea, so that's pretty much off the table.
My overall objective is to build something where teams can quickly add an article with a few images to the website that goes live in minutes, and I've been able to do it so far but not with HTTPS.
If any of you know a good way of instantly updating CloudFront distributions without changing file names, that would be great. Otherwise I'll probably have to start over, because I need my sites secured and the ability to update them instantly.
You people are awesome. Thanks a million for any help.
You need to invalidate files from the edge caches. It's a simple and quick process.
You can automate the process yourself in your pipeline, or you could potentially use a third-party tool such as aws-cloudfront-auto-invalidator.
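If you go the do-it-yourself route, a rough boto3 sketch of the invalidation call (the distribution ID is a placeholder; invalidating /* is the simple, blunt option):

import time
import boto3

cloudfront = boto3.client("cloudfront")
cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",  # placeholder: your distribution's ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)

Run this as the last step of the pipeline, after the files have been copied to S3.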
As inline code editing is not enabled for Go code on AWS Lambda, I am trying to create a Google Chrome extension to be able to edit the Go code by referring to the text or ZIP file in the S3 bucket. It would be nice if I could also deploy the updated Go code to Lambda.
I think I will have to perform the following steps from the extension:
Get the Go code from the S3 bucket or GitHub
Update it
Create a zip file from the updated code
Upload the zip file to the S3 bucket or GitHub
Deploy the updated zip file on the Lambda
I have no idea if it is a good approach or if there is any other approach possible for this. I would appreciate it if anyone could suggest a better approach or tell me whether what I am thinking is feasible.
I like the idea, but unfortunately I am not sure if that is a good idea.
Let me explain:
All the languages that AWS Lambda supports with inline editing are more or less interpreted languages: JavaScript, Python, etc.
The AWS runtime for those languages reads plain text files and compiles/runs them.
Since you deploy plain text files and the runtime takes care of running them, the AWS Lambda console allows you to edit those files.
Go, on the other hand, as well as supported languages like Swift or Java, needs to be deployed as a "binary" (I use air quotes because a Java JAR is strictly speaking not a binary but byte code, which is then interpreted by the JVM ...) to AWS.
The AWS Lambda runtime for those languages expects a binary and not plain text. That is why you cannot edit the code of Lambdas using those runtimes in the AWS console.
So even if you opened that ZIP, you would not find editable code.
Of course you could put the binary and the plain text code in that ZIP and then when you open that ZIP through your Chrome extension, you could show the plain text code to the user.
But then there is the matter of compiling the code into a binary that the AWS Lambda Go runtime can actually run.
So your Chrome extension would need to bundle a Go compiler. I'm not sure if that is possible, but I am sure it would not be trivial.
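For comparison, here is a rough sketch of what the usual compile-and-deploy cycle looks like when driven from a machine that does have the Go toolchain (function name and paths are placeholders), which is exactly the part a browser extension cannot easily reproduce:

import io
import os
import subprocess
import zipfile
import boto3

# Cross-compile the handler for the Lambda execution environment.
env = dict(os.environ, GOOS="linux", GOARCH="amd64", CGO_ENABLED="0")
subprocess.run(["go", "build", "-o", "bootstrap", "."], check=True, env=env)

# Zip the resulting binary in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as archive:
    archive.write("bootstrap")

# Push the new package to the function (name is a placeholder).
boto3.client("lambda").update_function_code(
    FunctionName="my-go-function",
    ZipFile=buf.getvalue(),
)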
I'm currently having a problem with fonts when I generate a PDF with wkhtmltopdf on CentOS 7 under a normal hosting account. However, when I create the PDF as root I get no errors.
The error that I'm getting is:
Fontconfig error: Cannot load default config file
I checked /etc/fonts/fonts.conf: it exists and has read permissions for everyone, and I don't know what else could be going on, taking into account that it works for root and not for the sub-accounts.
The code I am using to generate the PDF:
wkhtmltopdf /path/to/my.html /path/to/my.pdf
The main problem is that the fonts aren't rendering and we always get the "Sans Serif" font as the default. But the funny thing is that if I set the font to bold, it does render with the font I need, in this case "Verdana".
Thanks in advance.
I faced this problem with AWS Lambda today, which runs Amazon Linux but is CentOS-like inside. I found and solved the problem, so I think I should contribute to the community by answering it here.
First, check whether the fonts are available to that user; if not, you can provide a path and ship the fonts with your app.
An easy to deploy implementation of HTML-pdf for AWS Lambda
But any PhantomJS/wkhtmltopdf code throws Error: write EPIPE. At the following link, all the required dependencies are posted, which I think should be listed somewhere but aren't, except there. The configuration is also clearly explained:
AWS Lambda PhantomJS dependencies for Amazon Linux 2
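In case it helps, a small sketch of pointing Fontconfig at fonts you ship with the app before invoking wkhtmltopdf (the paths are placeholders; FONTCONFIG_PATH and FONTCONFIG_FILE are the standard Fontconfig environment variables):

import os
import subprocess

# Placeholder paths: a fonts directory shipped with the app, containing
# a fonts.conf that points at it.
env = dict(
    os.environ,
    FONTCONFIG_PATH="/home/myuser/app/fonts",
    FONTCONFIG_FILE="/home/myuser/app/fonts/fonts.conf",
)

subprocess.run(
    ["wkhtmltopdf", "/path/to/my.html", "/path/to/my.pdf"],
    check=True,
    env=env,
)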
OK, so in my particular case, it was not working because the hosting account had a "Jailed Shell" instead of a "Normal Shell".
This option can be changed in WHM for any specific account in the option "Manage Shell Access".
Hope this helps people in the future.
My Serverless Image Handler was working fine until now, and suddenly I'm getting the following error:
start_thumbor error: pycurl: libcurl link-time ssl backend (openssl) is different from compile-time ssl backend (nss)
This looks like a problem with the version of pycurl.
Please help me resolve it.
I tried changing the Python version to 3.6 in the ServerlessImageHandler Lambda function configuration.
I found a discussion about that issue on https://forums.aws.amazon.com/thread.jspa?messageID=909444, which sent me to https://github.com/awslabs/serverless-image-handler/issues/127#issuecomment-514757029.
GitHub user timkelty has the solution:
go to my CloudFormation Stack
click Update
"replace template"
paste in https://cf-templates-nestrom.s3-eu-west-1.amazonaws.com/serverless-image-handler/1.0/serverless-image-handler.template
so far has worked for me in us-east-1 and us-west-1
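The same "Update stack, replace template" steps can also be scripted; a rough boto3 sketch, assuming a placeholder stack name (you may also need to re-supply the stack's existing parameters with UsePreviousValue):

import boto3

cfn = boto3.client("cloudformation")
cfn.update_stack(
    StackName="ServerlessImageHandler",  # placeholder: your stack's name
    TemplateURL=(
        "https://cf-templates-nestrom.s3-eu-west-1.amazonaws.com/"
        "serverless-image-handler/1.0/serverless-image-handler.template"
    ),
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
)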
AWS has released a new version of the Serverless Image Handler; this is why everybody is suffering now, because the Thumbor functionality fails in the new version.
In the new version, SharpJS is used instead of Thumbor API calls.
You can check the new version and download it from here.
Even though you are still able to construct URLs in the old style, images in subfolders cannot be accessed anymore without encoding the URL (see the sketch after the notes below).
Old way:
abcdef.cloudfront.net/team/team1.png
New way:
abcdef.cloudfront.net/{base64encodedPath}
Note 1: If your images are in the root directory of the bucket, you are still able to access them old style like this:
abcdef.cloudfront.net/team1.png
Note 2: If you update your existing CloudFormation stack, you keep your old CloudFront domain (which is the good part).
You can also follow the current fixes from here.
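For reference, a quick sketch of building a new-style URL by base64-encoding the object path (domain and key are taken from the examples above; depending on the handler version the exact payload it expects may differ):

import base64

path = "team/team1.png"
encoded = base64.urlsafe_b64encode(path.encode("utf-8")).decode("ascii")
url = "https://abcdef.cloudfront.net/" + encoded
print(url)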
I have over 30 Leaflet maps hosted on my Google Cloud Platform bucket (for example) and it has always been an easy process to upload my folder (which includes an html file with sub-folders including .js and .css files) and share the map publicly.
I tried uploading another map today, but within the folder there are no files showing and I get the following message "There are no live objects in this folder. If you have object versioning enabled, this folder may contain archived versions of objects, which aren't visible in the console. You can list archived object versions using gsutil or the APIs."
Does anyone know what is going on here?
We have also seen this problem, and it seems that the issue is limited to buckets that have spaces in the name.
It's also not reproducible through the gcloud web console, but if you use gsutil to upload a file to a bucket with a space in the name then it won't be visible on the web UI.
I can see from your screenshot that your bucket name also has spaces (%20 in the URL).
If you need a workaround ASAP, you could rename your bucket...
But Google should fix this soon, I hope.
There is currently an open issue on GCS/Console integration.
If files have any symbols that need URL encoding, they are not visible in the console but are accessible via gsutil or the API (which is currently the recommended workaround).
The issue has been resolved as of 8 May 2018, 10:00 UTC.
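If you want to confirm the objects really are there while the console hides them, a quick sketch with the Python client (the bucket name is a placeholder):

from google.cloud import storage

client = storage.Client()
# Objects hidden by the console bug still show up when listed via the API.
for blob in client.list_blobs("my bucket with spaces"):
    print(blob.name)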
This can happen if the file doesn't have an extension: the UI treats it as a folder and lets you navigate into it, showing a blank folder instead of the file contents.
We had the same symptom (files show up in API but invisible on the web and via CLI).
The issue turned out to be that we were saving files to "./uploads", which Google interprets as "create a directory literally called '.' and then a subdirectory called uploads."
The fix was to upload to "uploads/" instead of "./uploads". We also just ran a mass copy operation via the API for everything under "./uploads". All visible now!
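A minimal sketch of the corrected upload, assuming placeholder bucket and file names:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-content-bucket")  # placeholder bucket name

# Bad: bucket.blob("./uploads/article.html") creates a literal "." folder.
# Good: use a plain "uploads/" prefix.
blob = bucket.blob("uploads/article.html")
blob.upload_from_filename("article.html")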
I also had spaces in my URL and it was not working properly yesterday. I checked this morning and everything is working as expected. I still have the spaces in my URL, by the way.