DigitalOcean Spaces with Elixir

I'm trying to find an AWS client for Elixir that can be used with DigitalOcean Spaces.
I tried aws-elixir (since it allows a custom endpoint), but I can't find a way to do S3 operations.
My questions:
How do I handle an S3 bucket from aws-elixir?
If aws-elixir doesn't work, what's the best solution for my situation?

aws-elixir does not support S3 unfortunately, but ExAws does. In order to use ExAws, you first need to add these dependencies in your mix.exs file:
defp deps() do
  [
    {:ex_aws, "~> 2.0"},
    {:ex_aws_s3, "~> 2.0"},
    {:poison, "~> 3.0"},
    {:hackney, "~> 1.9"},
    {:sweet_xml, "~> 0.6"}
  ]
end
Note that both ex_aws and ex_aws_s3 need to be added to your dependencies. hackney is an HTTP client, poison is for JSON parsing, and sweet_xml is for XML parsing.
Now that you've added the dependencies, you need to configure ExAws to connect to DigitalOcean Spaces instead of AWS.
Add this to your config.exs file:
config :ex_aws, :s3,
  %{
    access_key_id: "access key",
    secret_access_key: "secret key",
    scheme: "https://",
    host: %{"sfo2" => "your-space-name.sfo2.digitaloceanspaces.com"},
    region: "sfo2"
  }
"access key" and "secret key" need to be replaced with the actual keys you get from DigitalOcean.
Please make sure to replace "sfo2" with the actual Spaces region you're using. And of course, put your actual space name instead of your-space-name.
Don't forget to run mix deps.get, and you're all set.
You can start an iex session and verify that everything works by running iex -S mix and then typing:
ExAws.S3.list_objects("bucket") |> ExAws.request!
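If you also have the AWS CLI installed, you can sanity-check the same keys and Space outside of Elixir by pointing the CLI at the Spaces endpoint; the key values, Space name, and region below are placeholders:
# list objects in a Space via the generic S3 API (keys, Space name, and region are placeholders)
AWS_ACCESS_KEY_ID="access key" AWS_SECRET_ACCESS_KEY="secret key" \
  aws s3 ls s3://your-space-name --endpoint-url https://sfo2.digitaloceanspaces.com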

Related

Cloud Build fails with resource location constraint

We have a policy in place which restricts resources to EU regions.
When I try to execute a cloud build, gcloud wants to create a bucket (gs://[PROJECT_ID]_cloudbuild) to store staging sources. This step fails, because the default bucket location ('us') is used:
"code": 412,
"message": "'us' violates constraint ‘constraints/gcp.resourceLocations’"
As a workaround I tried to pass an existing bucket located in a valid region (using --gcs-source-staging-dir), but I got the same error.
How can this be solved?
Here the HTTP logs:
$ gcloud --log-http builds submit --gcs-source-staging-dir gs://my-custom-bucket/staging \
--tag gcr.io/xxxxxxxxxx/quickstart-image .
=======================
==== request start ====
uri: https://www.googleapis.com/storage/v1/b?project=xxxxxxxxxx&alt=json
method: POST
== headers start ==
accept: application/json
content-type: application/json
== headers end ==
== body start ==
{"name": "my-custom-bucket"}
== body end ==
==== request end ====
---- response start ----
-- headers start --
server: UploadServer
status: 412
-- headers end --
-- body start --
{
"error": {
"errors": [
{
"domain": "global",
"reason": "conditionNotMet",
"message": "'us' violates constraint ‘constraints/gcp.resourceLocations’",
"locationType": "header",
"location": "If-Match"
}
],
"code": 412,
"message": "'us' violates constraint ‘constraints/gcp.resourceLocations’"
}
}
-- body end --
---- response end ----
----------------------
ERROR: (gcloud.builds.submit) HTTPError 412: 'us' violates constraint ‘constraints/gcp.resourceLocations’
I found the solution to this problem. After you create the project (with the resource location restriction enabled), you should create a new bucket with the name [PROJECT_ID]_cloudbuild in the preferred location.
When you don't do this, cloud build submit will automatically create a bucket in the US, and this is not configurable. Because of the resource restrictions, Cloud Build is not able to create this bucket in the US, so it fails with the following error:
ERROR: (gcloud.builds.submit) HTTPError 412: 'us' violates constraint 'constraints/gcp.resourceLocations'
When you create the bucket by hand with the same name, Cloud Build will take that bucket as the default bucket. The solution was not immediately visible, because for projects that already had the cloudbuild bucket in place when the resource restrictions were applied, the problem did not appear.
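For reference, such a bucket can be created up front with gsutil; this is a sketch where the project ID placeholder matches the question and EU is just an example of an allowed location:
# create the Cloud Build staging bucket by hand in a location permitted by your policy
gsutil mb -p [PROJECT_ID] -l EU gs://[PROJECT_ID]_cloudbuild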
Cloud Build will use a default bucket to store logs. You can try adding a logsBucket field to the build config file with a specific bucket, as in this snippet:
steps:
- name: 'gcr.io/cloud-builders/go'
  args: ['install', '.']
logsBucket: 'gs://mybucket'
You can find detailed information about the build configuration file in the Cloud Build documentation.
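The same can also be done from the command line; as far as I know, gcloud builds submit accepts a --gcs-log-dir flag alongside --gcs-source-staging-dir (used in the question), so both logs and staging sources can point at a bucket you created in an allowed region. The bucket and project names below are placeholders:
# point staging sources and logs at buckets created in an allowed region
gcloud builds submit \
  --gcs-source-staging-dir gs://my-custom-bucket/staging \
  --gcs-log-dir gs://my-custom-bucket/logs \
  --tag gcr.io/[PROJECT_ID]/quickstart-image .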
I had this issue investigated by Google Support. They came up with the following answer:
After investigating with the different teams involved in this issue,
it seems like the restrictions affecting Cloud Build (in this case,
preventing resource allocation in US) is intended. The only workaround
that came up from the investigation was the creation of a new project
without the restrictions in place that allows the use of Cloud Build.
They also referred to a feature request (already mentioned in Christopher P's comment - thanks for that).
I faced the same issue with location constraints. Follow the steps below to fix it:
After logging in to the GCP console, select the project where you are facing the issue.
Then choose "Organization policies" from the IAM console. Make sure you have the necessary permissions for it to be listed.
Look for the "Google Cloud Platform - Resource Location Restriction" policy in the list. Typically it comes under the category of custom policy.
(Screenshot: location constraint)
Click on the policy, click Edit, and scroll down to the custom values to add the location you want.
Using the in: prefix includes everything under that location; to allow one specific value, use the location name without the prefix.
Here the error is "'us' violates constraint 'constraints/gcp.resourceLocations'", so enter the value us as a custom value, add it, and save. If you need all zones in us-east4, use the in: prefix like this:
in:us-east4-locations
Hope this works.
(Screenshot: allowed values)
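If you prefer the command line over the console, the same policy change can (to my knowledge) be made with gcloud; the sketch below assumes you have the required org-policy permissions, and the organization ID is a placeholder:
# allow US locations for the resource location constraint (organization ID is a placeholder)
gcloud resource-manager org-policies allow constraints/gcp.resourceLocations \
  in:us-locations --organization=123456789012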

Terraform and AWS: No Configuration Files Found Error

I am writing a small script that takes a small file from my local machine and puts it into an AWS S3 bucket.
My terraform.tf:
provider "aws" {
region = "us-east-1"
version = "~> 1.6"
}
terraform {
backend "s3" {
bucket = "${var.bucket_testing}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
key = "testexport/exportFile.tfstate"
region = "us-east-1"
encrypt = true
}
}
data "aws_s3_bucket" "pr-ip" {
bucket = "${var.bucket_testing}"
}
resource "aws_s3_bucket_object" "put_file" {
bucket = "${data.aws_s3_bucket.pr-ip.id}"
key = "${var.file_path}/${var.file_name}"
source = "src/Datafile.txt"
etag = "${md5(file("src/Datafile.txt"))}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
server_side_encryption = "aws:kms"
}
However, when I init:
terraform init
#=>
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working with Terraform immediately by creating Terraform configuration files.
and then try to apply:
terraform apply
#=>
Error: No configuration files found!
Apply requires configuration to be present. Applying without a configuration would mark everything for destruction, which is normally not what is desired. If you would like to destroy everything, please run 'terraform destroy' instead which does not require any configuration files.
I get the error above. I have also set up my default AWS access key ID and secret.
What can I do?
This error means that you have run the command in the wrong place. You have to be in the directory that contains your configuration files, so before running init or apply, cd into your Terraform project folder.
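A minimal sketch of that workflow, assuming your terraform.tf lives in a hypothetical ~/projects/s3-put directory:
# run Terraform from the directory that contains your .tf files
cd ~/projects/s3-put
terraform init
terraform apply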
Error: No configuration files found!
The above error arises when you are not in the folder that contains your configuration files.
To remediate the situation, create a .tf file in the project folder you will be working in.
Note - an empty .tf file will also eliminate the error, but it will be of limited use as it does not contain the provider info.
See the example below:
provider "aws" {
  region = "us-east-1" # this value will be asked for when terraform apply is executed if not provided here
}
So, in order for the terraform apply command to execute successfully, you need to make sure of the points below:
You need to be in your Terraform project folder (it can be any directory).
It must contain a .tf file, which should preferably contain the provider info.
Execute terraform init to initialize the backend and provider plugins.
You are now good to execute terraform apply (without any "no configuration files" error).
In case anyone comes across this now: I ran into an issue where my TF_WORKSPACE env var was set to a different workspace than the directory I was in. Double-check your current workspace with
terraform workspace show
To show your available workspaces:
terraform workspace list
To use one of the listed workspaces:
terraform workspace select <workspace name>
If the TF_WORKSPACE env var is set when you try to use terraform workspace select TF will print a message telling you of the potential issue:
The selected workspace is currently overridden using the TF_WORKSPACE
environment variable.
To select a new workspace, either update this environment variable or unset
it and then run this command again.
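If the override is the problem, clearing the variable for the current shell session is enough (assuming a POSIX shell):
# remove the override so Terraform falls back to the normally selected workspace
unset TF_WORKSPACE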
I had the same error as you. In my case it was not a VPN error but incorrect file naming, and I was in the project folder. To remedy the situation, I created a .tf file with the vim editor using the command vi aws.tf, then populated the file with the required configuration. Mine is working now.
I too had the same issue; remember that your Terraform file name should end with the .tf extension.
Another possible reason could be that you are using modules and the source URL is incorrect.
When I had:
source = "git::ssh://git#git.companyname.com/observability.git//modules/ec2?ref=v2.0.0"
instead of:
source = "git::ssh://git#git.companyname.com/observability.git//terraform/modules/ec2?ref=v2.0.0"
I was seeing the same error message as you.
I got this error this morning when deploying to production, on a project which has been around for years and nothing had changed. We finally traced it down to the person who created the production deploy ticket had pasted this command into an email using Outlook:
terraform init --reconfigure
Microsoft, in its infinite wisdom, combined the two hyphens into one and the one hyphen wasn't even the standard ASCII hyphen character (I think it's called an "en-dash"):
terraform init –reconfigure
This caused Terraform 0.12.31 to give the helpful error message:
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
It took us half an hour and another pair of eyes to notice that the hyphens were incorrect and needed to be re-typed! (I think terraform thought "reconfigure" was the name of the directory we wanted to run the init in, which of course didn't exist. Perhaps terraform could be improved to name the directory it's looking in when it reports this error?)
Thanks Microsoft for always being helpful (not)!
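One way to catch this class of problem early is to scan a pasted command for non-ASCII characters before running it; a small sketch using GNU grep, where the file name is just a placeholder:
# flag any line containing a non-ASCII character, such as an en dash pasted from Outlook
grep -nP '[^\x00-\x7F]' deploy-commands.txt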

Can't close ElasticSearch index on AWS?

I've created a new AWS ElasticSearch domain, for testing. I use ES on a different host right now, and I'm looking to move to AWS.
One thing I need to do is set the mapping (analyzers) on my instance. In order to do this, I need to "close" the index, or else ES will just raise an exception.
Whenever I try to close the index, though, I get an exception from AWS:
Your request: '/_all/_close' is not allowed by CloudSearch.
The AWS ES documentation specifically says to do this in some cases:
curl -XPOST 'http://search-weblogs-abcdefghijklmnojiu.us-east-1.a9.com/_all/_close'
I haven't found any documentation that says why I wouldn't be able to close my indices on AWS ES, nor have I found anyone else who has this problem.
It's also a bit strange that I've got an ElasticSearch domain, but it's giving me a CloudSearch error message, since I thought those were different services, though I suppose one is implemented in terms of the other.
thanks!
AWS Elasticsearch does not support the "close" operation on indexes.
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains.html
"Currently, Amazon ES does not support the Elasticsearch _close API"
According to an AWS document I found recently, you first have to upgrade your Elasticsearch domain to version 7.4 or greater.
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-handling-errors.html#aes-troubleshooting-close-api
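Once the domain is on a supported version, the close call itself is a plain Elasticsearch request against the domain endpoint; a sketch with a placeholder endpoint and index name (closing _all remains risky, so targeting a single index is safer):
# close a single index on an Amazon ES domain running 7.4 or later (endpoint and index are placeholders)
curl -XPOST 'https://search-mydomain-abc123.us-east-1.es.amazonaws.com/my-index/_close'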
Since closing all indices at once is a dangerous action, it may be disabled by default on your cluster. You need to make sure that your elasticsearch.yml configuration file doesn't contain this:
action.destructive_requires_name: true
You could set this to false in your configuration file and restart your cluster, but I strongly advise against that, since it opens the door to all kinds of other destructive actions, like deleting all your indices at once:
action.destructive_requires_name: false
What you should do instead is temporarily update the cluster settings using
curl -XPUT localhost:9200/_cluster/settings -d '{
"persistent" : {
"action.destructive_requires_name" : false
}
}'
Then close all your indices
curl -XPOST localhost:9200/_all/_close
And then reset the settings to a safer value:
curl -XPUT localhost:9200/_cluster/settings -d '{
"persistent" : {
"action.destructive_requires_name" : true
}
}'

aws ec2 request-spot-instances CLI issues

I'm trying to start a couple of spot instances from within a simple script, and the syntax supplied in the AWS documentation and the aws ec2 request-spot-instances help output is listed in either Java or JSON syntax. How does one enter the parameters under the JSON syntax from inside a shell script?
aws --version
aws-cli/1.2.6 Python/2.6.5 Linux/2.6.21.7-2.fc8xen
aws ec2 request-spot-instances help
-- at the start of "launch specification" it lists JSON syntax
--launch-specification (structure)
Specifies additional launch instance information.
JSON Syntax:
{
"ImageId": "string",
"KeyName": "string",
}, ....
"EbsOptimized": true|false,
"SecurityGroupIds": ["string", ...],
"SecurityGroups": ["string", ...]
}
I have tried every possible combination of the following, adding and moving brackets, quotes, changing options, etc., all to no avail. What would be the correct formatting of the variable $launch below to make this work? Other command variations -- "ec2-request-spot-instances" -- are not working in my environment, nor does it work if I try to substitute --spot-price with -p.
#!/bin/bash
launch="{"ImageId":"ami-a999999","InstanceType":"c1.medium"} "SecurityGroups":"launch-wizard-6""
echo $launch
aws ec2 request-spot-instances --spot-price 0.01 --instance-count 1 --type c1.small --launch-specification $launch
This produces the result:
Unknown options: SecurityGroups:launch-wizard-6
Substituting the security group number has the same result.
aws ec2 describe-instances works perfectly, as does aws ec2 start-instance, so the environment and account information are properly setup, but I need to utilize spot pricing.
In fact, nothing is working as listed in this user documentation: http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RequestSpotInstances.html
Thank you,
I know this is an old question, but in case somebody runs into it: I had the same issue recently with the CLI. It was very hard to get all the parameters to work correctly for request-spot-instances.
#!/bin/bash
AWS_DEFAULT_OUTPUT="text"
UserData=$(base64 < userdata-current)
region="us-west-2"
price="0.03"
zone="us-west-2c"
aws ec2 request-spot-instances --region $region --spot-price $price --launch-specification "{ \"KeyName\": \"YourKey\", \"ImageId\": \"ami-3d50120d\" , \"UserData\": \"$UserData\", \"InstanceType\": \"r3.large\" , \"Placement\": {\"AvailabilityZone\": \"$zone\"}, \"IamInstanceProfile\": {\"Arn\": \"arn:aws:iam::YourAccount:YourProfile\"}, \"SecurityGroupIds\": [\"YourSecurityGroupId\"],\"SubnetId\": \"YourSubnectId\" }"
Basically what I had to do was put my user data in an external file, load it into the UserData variable, and then pass that on the command line. Trying to get everything onto the command line, or using an external file, for request-spot-instances just kept failing. Note that other commands worked just fine, so this is specific to request-spot-instances.
I detailed more about what I ended up doing here.
You have to use a list in this case:
"SecurityGroups": ["string", ...]
so
"SecurityGroups":"launch-wizard-6"
becomes
"SecurityGroups":["launch-wizard-6"]
Anyway, I'm dealing with the CLI right now and I found it more useful to use an external JSON file.
Here is an example using Python:
myJson="file:///Users/xxx/Documents/Python/xxxxx/spotInstanceInformation.json"
x= subprocess.check_output(["/usr/local/bin/aws ec2 request-spot-instances --spot-price 0.2 --launch-specification "+myJson],shell=True)
print x
And the output is:
"SpotInstanceRequests": [
{
"Status": {
"UpdateTime": "2013-12-09T02:41:41.000Z",
"Code": "pending-evaluation",
"Message": "Your Spot request has been submitted for review, and is pending evaluation."
etc etc ....
Doc is here : http://docs.aws.amazon.com/cli/latest/reference/ec2/request-spot-instances.html
FYI - I'm prepending file:/// because I'm using a Mac. If you are launching your bash script on Linux, you could just use myJson="/path/to/file/".
The first problem, here, is quoting and formatting:
$ launch="{"ImageId":"ami-a999999","InstanceType":"c1.medium"} "SecurityGroups":"launch-wizard-6""
This isn't going to generate valid JSON, because the block you copied from the help file includes a spurious closing brace from a nested object that you didn't include, the closing brace is missing, and the unescaped double quotes are disappearing.
But we're not really getting to the point where the json is actually being validated, because with that space after the last brace, the cli is assuming that SecurityGroups and launch-wizard-6 are more command line options following the argument to --launch-specification:
$ echo $launch
{ImageId:ami-a999999,InstanceType:c1.medium} SecurityGroups:launch-wizard-6
That's probably not what you expected... so we'll fix the quoting so that it is passed as one long argument, once the JSON itself is valid:
From the perspective of just generating valid json structures (not necessarily content), the data you are most likely trying to send would actually look like this, based on the docs:
{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}
Check that as structurally valid JSON, here.
Fixing the bracing, commas, and bracketing, the CLI stops throwing that error, with this formatting:
$ launch='{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}'
$ echo $launch
{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}
That isn't to say the API might not subsequently reject the request due to something else incorrect or missing, but you were never actually getting to the point of sending anything to the API; this was failing local validation in the command line tools.
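Putting it together, a corrected invocation would look something like the sketch below. Note that --type expects one-time or persistent (the instance type belongs inside the launch specification, as InstanceType), and quoting "$launch" keeps the JSON as a single argument:
launch='{"ImageId":"ami-a999999","InstanceType":"c1.medium","SecurityGroups":["launch-wizard-6"]}'
aws ec2 request-spot-instances --spot-price 0.01 --instance-count 1 --type one-time --launch-specification "$launch"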

Force CloudFront distribution/file update

I'm using Amazon's CloudFront to serve static files of my web apps.
Is there no way to tell a CloudFront distribution that it needs to refresh its files, or to point out a single file that should be refreshed?
Amazon recommends that you version your files like logo_1.gif, logo_2.gif and so on as a workaround for this problem, but that seems like a pretty stupid solution. Is there absolutely no other way?
Good news. Amazon finally added an Invalidation Feature. See the API Reference.
This is a sample request from the API Reference:
POST /2010-08-01/distribution/[distribution ID]/invalidation HTTP/1.0
Host: cloudfront.amazonaws.com
Authorization: [AWS authentication string]
Content-Type: text/xml
<InvalidationBatch>
<Path>/image1.jpg</Path>
<Path>/image2.jpg</Path>
<Path>/videos/movie.flv</Path>
<CallerReference>my-batch</CallerReference>
</InvalidationBatch>
As of March 19, Amazon now allows Cloudfront's cache TTL to be 0 seconds, thus you (theoretically) should never see stale objects. So if you have your assets in S3, you could simply go to AWS Web Panel => S3 => Edit Properties => Metadata, then set your "Cache-Control" value to "max-age=0".
This is straight from the API documentation:
To control whether CloudFront caches an object and for how long, we recommend that you use the Cache-Control header with the max-age= directive. CloudFront caches the object for the specified number of seconds. (The minimum value is 0 seconds.)
With the Invalidation API, it does get updated in a few minutes.
Check out PHP Invalidator.
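If your origin is S3, the Cache-Control metadata can also be set when uploading with the AWS CLI; the bucket and file names below are placeholders:
# upload with a Cache-Control header so CloudFront re-validates the object
aws s3 cp logo.gif s3://your-bucket/logo.gif --cache-control "max-age=0"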
Bucket Explorer has a UI that makes this pretty easy now. Here's how:
Right click your bucket. Select "Manage Distributions."
Right click your distribution. Select "Get Cloudfront invalidation list"
Then select "Create" to create a new invalidation list.
Select the files to invalidate, and click "Invalidate." Wait 5-15 minutes.
Automated update setup in 5 minutes
OK, guys. The best possible way for now to perform automatic CloudFront updates (invalidations) is to create a Lambda function that will be triggered every time any file is uploaded to the S3 bucket (a new one or a rewritten one).
Even if you have never used Lambda functions before, it is really easy -- just follow my step-by-step instructions and it will take just 5 minutes:
Step 1
Go to https://console.aws.amazon.com/lambda/home and click Create a lambda function
Step 2
Click on Blank Function (custom)
Step 3
Click on empty (stroked) box and select S3 from combo
Step 4
Select your Bucket (same as for CloudFront distribution)
Step 5
Set an Event Type to "Object Created (All)"
Step 6
Set Prefix and Suffix, or leave them empty if you don't know what they are.
Step 7
Check Enable trigger checkbox and click Next
Step 8
Name your function (something like: YourBucketNameS3ToCloudFrontOnCreateAll)
Step 9
Select Python 2.7 (or later) as Runtime
Step 10
Paste following code instead of default python code:
from __future__ import print_function
import boto3
import time

def lambda_handler(event, context):
    for items in event["Records"]:
        path = "/" + items["s3"]["object"]["key"]
        print(path)
        client = boto3.client('cloudfront')
        invalidation = client.create_invalidation(
            DistributionId='_YOUR_DISTRIBUTION_ID_',
            InvalidationBatch={
                'Paths': {
                    'Quantity': 1,
                    'Items': [path]
                },
                'CallerReference': str(time.time())
            })
Step 11
Open https://console.aws.amazon.com/cloudfront/home in a new browser tab and copy your CloudFront distribution ID for use in next step.
Step 12
Return to lambda tab and paste your distribution id instead of _YOUR_DISTRIBUTION_ID_ in the Python code. Keep surrounding quotes.
Step 13
Set handler: lambda_function.lambda_handler
Step 14
Click on the role combobox and select Create a custom role. New tab in browser will be opened.
Step 15
Click view policy document, click Edit, click OK, and replace the role definition with the following (as is):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudfront:CreateInvalidation"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Step 16
Click Allow. This will return you to the Lambda screen. Double-check that the role name you just created is selected in the Existing role combobox.
Step 17
Set Memory (MB) to 128 and Timeout to 5 sec.
Step 18
Click Next, then click Create function
Step 19
You are good to go! From now on, each time you upload or re-upload any file to S3, it will be invalidated in all CloudFront edge locations.
PS - When you are testing, make sure that your browser is loading images from CloudFront, not from the local cache.
PPS - Please note that only the first 1000 invalidation paths per month are free; each invalidation path over the limit costs $0.005 USD. Additional charges for the Lambda function may also apply, but it is extremely cheap.
If you have boto installed (which is not just for python, but also installs a bunch of useful command line utilities), it offers a command line util specifically called cfadmin or 'cloud front admin' which offers the following functionality:
Usage: cfadmin [command]
cmd - Print help message, optionally about a specific function
help - Print help message, optionally about a specific function
invalidate - Create a cloudfront invalidation request
ls - List all distributions and streaming distributions
You invalidate things by running:
$sam# cfadmin invalidate <distribution> <path>
One very easy way to do it is FOLDER versioning.
So if you have hundreds of static files, for example, simply put all of them into a folder named by year + version.
For example, I use a folder called 2014_v1 which contains all my static files...
Inside my HTML I always reference that folder. (Of course I have a PHP include where I set the name of the folder, so by changing one file it actually changes in all my PHP files.)
If I want a complete refresh, I simply rename the folder to 2014_v2 in my source and change the PHP include to 2014_v2.
All the HTML automatically changes and asks for the new path, CloudFront misses the cache and requests it from the source.
Example:
SOURCE.mydomain.com is my source,
cloudfront.mydomain.com is a CNAME to the CloudFront distribution.
So the PHP calls this file:
cloudfront.mydomain.com/2014_v1/javascript.js
and when I want a full refresh, I simply rename the folder in the source to "2014_v2" and change the PHP include to set the folder to "2014_v2".
Like this there is no delay for invalidation and NO COST!
This is my first post on Stack Overflow, hope I did it well!
In Ruby, using the fog gem:
AWS_ACCESS_KEY = ENV['AWS_ACCESS_KEY_ID']
AWS_SECRET_KEY = ENV['AWS_SECRET_ACCESS_KEY']
AWS_DISTRIBUTION_ID = ENV['AWS_DISTRIBUTION_ID']
conn = Fog::CDN.new(
  :provider => 'AWS',
  :aws_access_key_id => AWS_ACCESS_KEY,
  :aws_secret_access_key => AWS_SECRET_KEY
)
images = ['/path/to/image1.jpg', '/path/to/another/image2.jpg']
conn.post_invalidation AWS_DISTRIBUTION_ID, images
Even with invalidation, it still takes 5-10 minutes for the invalidation to process and refresh on all Amazon edge servers.
The current AWS CLI supports invalidation in preview mode. Run the following in your console once:
aws configure set preview.cloudfront true
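With the preview enabled, the invalidation itself is a single command; the distribution ID and paths below are placeholders:
# invalidate specific paths (or use "/*" to invalidate everything)
aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths /index.html "/images/*"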
I deploy my web project using npm. I have the following scripts in my package.json:
{
  "scripts": {
    "build.prod": "ng build --prod --aot",
    "aws.deploy": "aws s3 sync dist/ s3://www.mywebsite.com --delete --region us-east-1",
    "aws.invalidate": "aws cloudfront create-invalidation --distribution-id [MY_DISTRIBUTION_ID] --paths /*",
    "deploy": "npm run build.prod && npm run aws.deploy && npm run aws.invalidate"
  }
}
Having the scripts above in place you can deploy your site with:
npm run deploy
Set TTL=1 hour and replace
http://developer.amazonwebservices.com/connect/ann.jspa?annID=655
Just posting to inform anyone visiting this page (first result for 'Cloudfront File Refresh')
that there is an easy-to-use and easy-to-access online invalidator available at swook.net.
This new invalidator is:
Fully online (no installation)
Available 24x7 (hosted by Google) and does not require any memberships.
There is history support, and path checking to let you invalidate your files with ease. (Often with just a few clicks after invalidating for the first time!)
It's also very secure, as you'll find out when reading its release post.
Full disclosure: I made this. Have fun!
Go to CloudFront.
Click on your ID/Distributions.
Click on Invalidations.
Click Create Invalidation.
In the giant example box, type * and click Invalidate.
Done
If you are using AWS, you probably also use its official CLI tool (sooner or later). AWS CLI version 1.9.12 or above supports invalidating a list of file names.