I am trying to deploy my nuxt static website to S3 using this guide.
https://nuxtjs.org/faq/deployment-aws-s3-cloudfront
The deploy script works when using the following exports, which I tried on a personal AWS account:
AWS_ACCESS_KEY_ID="key"
AWS_SECRET_ACCESS_KEY="secret"
It does not work when I unset these exports and use the AWS_PROFILE export on a separate AWS account. On that account I am not able to get an access key and secret because of company policy.
I also use these AWS profiles for other things so I am sure they are configured properly.
The error I am getting in the console is:
Error: Connect EHOSTUNREACH <EC2 IP address???>
The part in brackets is the IP address I am seeing. It is weird that it tries to connect to EC2, since the script only works with S3 and CloudFront.
The script I am using:
#!/bin/bash
export AWS_PROFILE="profile_name"
export AWS_BUCKET_NAME="example.com"
export AWS_CLOUDFRONT="UPPERCASE"
# Load nvm (node version manager), install node (version in .nvmrc), and npm install packages
[ -s "$HOME/.nvm/nvm.sh" ] && source "$HOME/.nvm/nvm.sh" && nvm use
# Run npm install if not already done.
[ ! -d "node_modules" ] && npm install
npm run generate
gulp deploy
As for the gulpfile:
const gulp = require('gulp')
const awspublish = require('gulp-awspublish')
const cloudfront = require('gulp-cloudfront-invalidate-aws-publish')
const parallelize = require('concurrent-transform')
// https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html
const config = {
  // Required
  params: {
    Bucket: process.env.AWS_BUCKET_NAME
  },
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    signatureVersion: 'v3'
  },
  // Optional
  deleteOldVersions: false, // NOT FOR PRODUCTION
  distribution: process.env.AWS_CLOUDFRONT, // CloudFront distribution ID
  region: process.env.AWS_DEFAULT_REGION,
  headers: {
    /* 'Cache-Control': 'max-age=315360000, no-transform, public', */
  },
  // Sensible Defaults - gitignore these Files and Dirs
  distDir: 'dist',
  indexRootPath: true,
  cacheFileName: '.awspublish',
  concurrentUploads: 10,
  wait: true // wait for CloudFront invalidation to complete (about 30-60 seconds)
}
gulp.task('deploy', function () {
  // create a new publisher using S3 options
  // http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#constructor-property
  const publisher = awspublish.create(config)

  let g = gulp.src('./' + config.distDir + '/**')

  // publisher will add Content-Length, Content-Type and headers specified above
  // If not specified it will set x-amz-acl to public-read by default
  g = g.pipe(
    parallelize(publisher.publish(config.headers), config.concurrentUploads)
  )

  // Invalidate CDN
  if (config.distribution) {
    console.log('Configured with CloudFront distribution')
    g = g.pipe(cloudfront(config))
  } else {
    console.log(
      'No CloudFront distribution configured - skipping CDN invalidation'
    )
  }

  // Delete removed files
  if (config.deleteOldVersions) {
    g = g.pipe(publisher.sync())
  }

  // create a cache file to speed up consecutive uploads
  g = g.pipe(publisher.cache())

  // print upload updates to console
  g = g.pipe(awspublish.reporter())

  return g
})
The gulp-awspublish docs mention that it should be possible to connect with an AWS profile by exporting it (which I do in my deploy script).
They also mention using the AWS JS SDK, which I also tried by integrating the following snippet:
var AWS = require("aws-sdk");
var publisher = awspublish.create({
  region: "your-region-id",
  params: {
    Bucket: "..."
  },
  credentials: new AWS.SharedIniFileCredentials({ profile: "myprofile" })
});
When I use the AWS_PROFILE export it does at least seem to authenticate. When using the SDK I receive an error mentioning:
CredentialsError: Missing Credentials in config, if using
AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
Adding the latter (AWS_SDK_LOAD_CONFIG=1) to my deployment script did not make any difference.
Any idea if I am missing something in the script to make it work?
My user policies were set as mentioned in the tutorial. Maybe they forgot something?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::example.com"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": ["arn:aws:s3:::example.com/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudfront:CreateInvalidation",
        "cloudfront:GetInvalidation",
        "cloudfront:ListInvalidations",
        "cloudfront:UnknownOperation"
      ],
      "Resource": "*"
    }
  ]
}
Since awspublish uses the JavaScript SDK, I needed to export AWS_SDK_LOAD_CONFIG=true, which solved the issue!
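For reference: without that flag, the v2 JavaScript SDK only reads ~/.aws/credentials and ignores profiles defined in ~/.aws/config, and when no usable credentials turn up it falls through the provider chain to the EC2 instance metadata endpoint, which would explain the EHOSTUNREACH against an EC2-looking IP above. A minimal sketch to verify the profile actually resolves (profile name is a placeholder, and this assumes the aws-sdk v2 package that gulp-awspublish depends on):

// Both must be set before aws-sdk is loaded; values are placeholders.
process.env.AWS_SDK_LOAD_CONFIG = '1'
process.env.AWS_PROFILE = 'profile_name'

const AWS = require('aws-sdk')

// Resolves credentials through the default provider chain and reports the result.
AWS.config.getCredentials(function (err) {
  if (err) {
    console.error('Credential resolution failed:', err)
  } else {
    console.log('Resolved access key:', AWS.config.credentials.accessKeyId)
  }
})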
I have a lambda which is attempting to put an object in an S3 bucket.
The code to configure the S3 client is as follows:
const configuration: S3ClientConfig = {
  region: 'us-west-2',
};

if (process.env.DEVELOPMENT_MODE) {
  configuration.credentials = {
    accessKeyId: process.env.AWS_ACCESS_KEY!,
    secretAccessKey: process.env.AWS_SECRET_KEY!,
  }
}

export const s3 = new S3Client(configuration);
And the code to upload the file is as follows:
s3.send(new PutObjectCommand({
  Bucket: bucketName,
  Key: fileName,
  ContentType: contentType,
  Body: body,
}))
This works locally. The lambda's role includes a policy which in turn includes the following statement:
{
  "Action": [
    "s3:DeleteObject",
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::BUCKET_NAME/*"
  ],
  "Effect": "Allow"
}
However, when I invoke this Lambda, it fails with the following stack trace:
Error: Resolved credential object is not valid
    at SignatureV4.validateResolvedCredentials (webpack://backend/../node_modules/@aws-sdk/signature-v4-multi-region/node_modules/@aws-sdk/signature-v4/dist-es/SignatureV4.js?:307:19)
    at SignatureV4.eval (webpack://backend/../node_modules/@aws-sdk/signature-v4-multi-region/node_modules/@aws-sdk/signature-v4/dist-es/SignatureV4.js?:50:30)
    at step (webpack://backend/../node_modules/tslib/tslib.es6.js?:130:23)
    at Object.eval [as next] (webpack://backend/../node_modules/tslib/tslib.es6.js?:111:53)
    at fulfilled (webpack://backend/../node_modules/tslib/tslib.es6.js?:101:58)
I'm using (what is currently) the latest JavaScript AWS SDK, version 3.165.0. What am I missing here?
The problem is that I was trying to load the credentials from environment variables instead of relying on the IAM role. It turns out process.env.DEVELOPMENT_MODE was resolving to the string 'true' rather than the boolean true - and since environment variables are always strings, the bare truthiness check also passed inside the Lambda, configuring the client with undefined credentials instead of the role. Comparing against the string explicitly fixes it:
if (process.env.DEVELOPMENT_MODE === 'true') {
  configuration.credentials = {
    accessKeyId: process.env.AWS_ACCESS_KEY!,
    secretAccessKey: process.env.AWS_SECRET_KEY!,
  }
}
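To make the failure mode concrete, here is a standalone sketch of the pitfall (plain Node, nothing project-specific):

// Environment variables are always strings; any non-empty string is truthy.
process.env.DEVELOPMENT_MODE = 'false'

if (process.env.DEVELOPMENT_MODE) {
  console.log('runs even though the value is "false"')
}

if (process.env.DEVELOPMENT_MODE === 'true') {
  console.log('never printed for "false"') // correctly skipped
}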
I started a bare Expo app called MyVideoApp with expo init. Then I created an AWS account and in the terminal ran:
npm install -g @aws-amplify/cli
amplify configure
This signed me into the console; I went through the default steps and created an IAM user in region eu-west-2 with username amplify-user, pasted in the accessKeyId & secretAccessKey, and used profile name amplify-user-profile.
cd ~/Documents/MyVideoApp/ && amplify init
? Enter a name for the project MyVideoApp
? Enter a name for the environment dev
? Choose your default editor: IntelliJ IDEA
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using react-native
? Source Directory Path: /
? Distribution Directory Path: /
? Build Command: npm run-script build
? Start Command: npm run-script start
Using default provider awscloudformation
? Do you want to use an AWS profile? Yes
? Please choose the profile you want to use amplify-user-profile
Adding backend environment dev to AWS Amplify Console app: d37chh30hholq6
amplify push
At this point I had an amplify folder in my project directory and an S3 bucket called amplify-myvideoapp-dev-50540-deployment. I uploaded an image, icon_1.png, into the bucket and tried to download it from the app via a button click.
import React from 'react';
import { StyleSheet, Text, View, SafeAreaView, Button } from 'react-native';
import Amplify, { Storage } from 'aws-amplify';
import awsmobile from "./aws-exports";

Amplify.configure(awsmobile);

async function getImage() {
  try {
    let data = await Storage.get('icon_1.jpg')
  } catch (err) {
    console.log(err)
  }
}

export default function App() {
  return (
    <SafeAreaView style={styles.container}>
      <Text>Hello, World!</Text>
      <Button title={"Click to Download!"} onPress={getImage}/>
    </SafeAreaView>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
  },
});
Output:
No credentials
[WARN] 18:54.93 AWSS3Provider - ensure credentials error, No Cognito Identity pool provided for unauthenticated access
...
So I set up (but maybe not correctly?) a user pool (my_first_pool) and an identity pool (myvidapp). This didn't help. Furthermore, when I go into my bucket and click Permissions -> Bucket Policy, it's just empty... not sure if that's okay when only the owner is trying to access the bucket and its contents.
I don't know what's wrong or what else to try. I essentially just want to authenticate my backend so that anyone who git clones this code would be able to run it and access the bucket.
Edit: aws-exports.js
/* eslint-disable */
// WARNING: DO NOT EDIT. This file is automatically generated by AWS Amplify. It will be overwritten.
const awsmobile = {
  "aws_project_region": "eu-west-2"
};

export default awsmobile;
Since you've indicated that you're okay with all of the files in the S3 bucket being publicly accessible, I would suggest the following:
Select the bucket in the AWS console (console.aws.amazon.com)
Under "Permissions" select "Block Public Access" and edit the settings by un-checking all of the options under and including "Block all public access", then save and confirm.
Go to the bucket policy, and paste in the following (Note: replace "YOUR_BUCKET_NAME_HERE" with "amplify-myvideoapp-dev-50540-deployment" first):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::[YOUR_BUCKET_NAME_HERE]/*"
      ]
    }
  ]
}
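Once that policy is in place, you can fetch the object directly over HTTPS without going through Amplify Storage or Cognito at all, so the "No credentials" error disappears for reads. A rough sketch, with the URL assembled from the bucket name and region in the question and the key assumed to sit at the bucket root:

// Public objects are reachable via a plain HTTPS URL - no Amplify/Cognito needed.
const url =
  'https://amplify-myvideoapp-dev-50540-deployment.s3.eu-west-2.amazonaws.com/icon_1.png';

async function getImage() {
  const response = await fetch(url);
  if (!response.ok) throw new Error('Download failed: ' + response.status);
  return response.blob();
}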
I am quite new to Terraform and GitLab CI and there is something that I am trying to do here with it.
I want to use Terraform to create an IAM user and an S3 bucket, use policies to allow certain operations on this S3 bucket to this IAM user, and have the IAM user's credentials saved in the artifacts.
Now the above is going to be my core module.
The core module looks something like this:
Contents of: aws-s3-iam-combo.git
(The credentials for the IAM user that all the Terraform would be run as, say admin-user, would be stored in GitLab secrets.)
main.tf
resource "aws_s3_bucket" "bucket" {
bucket = "${var.name}"
acl = "private"
force_destroy = "true"
tags {
environment = "${var.tag_environment}"
team = "${var.tag_team}"
}
policy =<<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "${aws_iam_user.s3.arn}"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::${var.name}",
"arn:aws:s3:::${var.name}/*"
]
}
]
}
EOF
}
resource "aws_iam_user" "s3" {
name = "${var.name}-s3"
force_destroy = "true"
}
resource "aws_iam_access_key" "s3" {
user = "${aws_iam_user.s3.name}"
}
resource "aws_iam_user_policy" "s3_policy" {
name = "${var.name}-policy-s3"
user = "${aws_iam_user.s3.name}"
policy =<<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::${var.name}",
"arn:aws:s3:::${var.name}/*"
]
}
]
}
EOF
}
outputs.tf
output "bucket" {
value = "${aws_s3_bucket.bucket.bucket}"
}
output "bucket_id" {
value = "${aws_s3_bucket.bucket.id}"
}
output "iam_access_key_id" {
value = "${aws_iam_access_key.s3.id}"
}
output "iam_access_key_secret" {
value = "${aws_iam_access_key.s3.secret}"
}
variables.tf
variable "name" {
type = "string"
}
variable "tag_team" {
type = "string"
default = ""
}
variable "tag_environment" {
type = "string"
default = ""
}
variable "versioning" {
type = "string"
default = false
}
variable "profile" {
type = "string"
default = ""
}
Anyone in the organization who now needs to create S3 buckets would need to create a new repo, something of the form:
main.tf
module "aws-s3-john-doe" {
source = "git::https://git#gitlab-address/terraform/aws-s3-iam-combo.git?ref=v0.0.1"
name = "john-doe"
tag_team = "my_team"
tag_environment = "staging"
}
gitlab-ci.yml
stages:
  - plan
  - apply

plan:
  image: hashicorp/terraform
  stage: plan
  script:
    - terraform init
    - terraform plan

apply:
  image: hashicorp/terraform
  stage: apply
  script:
    - terraform init
    - terraform apply
  when: manual
  only:
    - master
And then the pipeline would trigger and when this repo gets merged to master, the resources (S3 and IAM user) would be created and the user would have this IAM user's credentials.
Now the problem is that we have multiple AWS accounts. So if a dev wants to create an S3 bucket in a certain account, that would not be possible with the above setup, as the admin-user, whose creds are in GitLab secrets, belongs to only one account.
Now I don't understand how to achieve this requirement. I have the below idea (please suggest if there's a better way to do this):
Have multiple sets of creds set up in GitLab secrets, one for each AWS account in question
Take user input, specifying the AWS account they want the resources created in, as a variable. So something like, say:
main.tf
module "aws-s3-john-doe" {
source = "git::https://git#gitlab-address/terraform/aws-s3-iam-combo.git?ref=v0.0.1"
name = "john-doe"
tag_team = "my_team"
tag_environment = "staging"
aws_account = "account1"
}
And then in the aws-s3-iam-combo.git main.tf somehow read the creds for account1 from the GitLab secrets.
Now I do not know how to achieve the above, e.g. how do I read the required secret variable from GitLab, etc.
Can someone please help here?
You asked this some time ago, but maybe my idea still helps someone...
You can do this with envsubst (requires the pkg gettext to be installed on your runner or in the Docker image used to run the pipeline).
Here is an example:
First, in the project settings you set your different user accounts as environment variables (project secrets):
SECRET_1: my-secret-1
SECRET_2: my-secret-2
SECRET_3: my-secret-3
Then, create a file that holds a Terraform variable, let's name it vars_template.tf:
variable "gitlab_secrets" {
description = "Variables from GitLab"
type = "map"
default = {
secret_1 = "$SECRET_1"
secret_2 = "$SECRET_2"
secret_3 = "$SECRET_3"
}
}
In your CI pipeline, you can now configure the following:
plan:dev:
  stage: plan dev
  script:
    - envsubst < vars_template.tf > ./vars_envsubst.tf
    - rm vars_template.tf
    - terraform init
    - terraform plan -out "planfile_dev"
  artifacts:
    paths:
      - environments/dev/planfile_dev
      - environments/dev/vars_envsubst.tf

apply:dev:
  stage: apply dev
  script:
    - cd environments/dev
    - rm vars_template.tf
    - terraform init
    - terraform apply -input=false "planfile_dev"
  dependencies:
    - plan:dev
It's important to note that the original vars_template.tf has to be deleted, otherwise Terraform will throw an error that the variable is defined multiple times. You could circumvent this by storing the template file in a directory which is outside the Terraform working directory though.
But from the Terraform state you can see that the variable values were correctly substituted:
"outputs": {
"gitlab_secrets": {
"sensitive": false,
"type": "map",
"value": {
"secret_1": "my-secret-1",
"secret_2": "my-secret-2",
"secret_3": "my-secret-3"
}
}
}
You can then access the values with "${var.gitlab_secrets["secret_1"]}" in your Terraform resources etc.
UPDATE: Note that this method will store the secrets in the Terraform state file, which can be a potential security issue if the file is stored in an S3 bucket for collaborative work with Terraform. The bucket should at least be encrypted. In addition, it's recommended to limit access to the files with ACLs so that, e.g., only a terraform user has access to them. And, of course, a user could reveal the secrets via Terraform outputs...
I'm getting GraphQLError: Request failed with status code 401
I followed the automatic configuration instructions from:
https://aws.github.io/aws-amplify/media/api_guide#automated-configuration-with-cli
I tried looking, but there is a lack of resources for IAM. It looks like everything should be set up automatically and done with the Amplify CLI after I put in the IAM access key and secret.
Is further setup required? Here is my code:
import Amplify, { API, graphqlOperation, Hub } from "aws-amplify";
import aws_config from "../../aws-exports";

Amplify.configure(aws_config);

const ListKeywords = `query ListKeywords {
  listKeyword {
    keyword {
      id
      name
    }
  }
}`;

const loop = async () => {
  const allKeywords = await API.graphql(graphqlOperation(ListKeywords));
}
Could it also be because my GraphQL resolvers are not set up yet for ListKeywords?
If you're using IAM as the authorization type on your AppSync API, then the issue is that the Cognito role used by the Auth category when invoking Amplify.configure() isn't granted permissions for GraphQL operations. It needs something like this attached:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "appsync:GraphQL"
      ],
      "Resource": [
        "arn:aws:appsync:us-west-2:123456789012:apis/YourGraphQLApiId/*"
      ]
    }
  ]
}
More details here: https://docs.aws.amazon.com/appsync/latest/devguide/security.html
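In case the CLI didn't wire the endpoint up for you, you can also point Amplify at the API and force IAM signing explicitly. A hedged sketch - the endpoint, region, and API id below are placeholders, but the keys are the ones the CLI writes into aws-exports.js:

import Amplify from 'aws-amplify';

// Placeholder values - use the endpoint/region of your own AppSync API.
Amplify.configure({
  aws_appsync_graphqlEndpoint:
    'https://YourGraphQLApiId.appsync-api.us-west-2.amazonaws.com/graphql',
  aws_appsync_region: 'us-west-2',
  aws_appsync_authenticationType: 'AWS_IAM',
});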
Not sure if this helps, but I've been struggling with this for a while and found that if I add the API and use IAM as the auth method, I need to add an @auth rule to the schema too.
See below:
type TimeLapseCamera @model
  @auth(rules: [
    { allow: private, provider: iam }
  ])
{
  ...
}
I just tested this and my web page is successfully adding a record.
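For context, with that rule in place a record is added via a standard generated mutation. A hedged sketch - the mutation name follows Amplify's @model naming convention, and the `name` input field is hypothetical:

import { API, graphqlOperation } from 'aws-amplify';

// @model generates createTimeLapseCamera; the `name` input field is hypothetical.
const CreateTimeLapseCamera = `mutation Create($input: CreateTimeLapseCameraInput!) {
  createTimeLapseCamera(input: $input) {
    id
  }
}`;

async function addRecord() {
  await API.graphql(graphqlOperation(CreateTimeLapseCamera, {
    input: { name: 'camera-1' },
  }));
}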
Note to the other comment: I do not have Cognito in this at all - it's a simple Vue app with Amplify.
I just changed ~/.aws/credentials and now it's working.
Looks like even if you have project-specific configuration via Amplify's command line tools or ~/.awsmobile/aws-config.js, it still relies on ~/.aws.
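A small way to confirm which profile the SDK actually reads from ~/.aws/credentials (this assumes the v2 aws-sdk package; the profile name is a placeholder):

const AWS = require('aws-sdk');

// Reads ~/.aws/credentials by default (or the file in AWS_SHARED_CREDENTIALS_FILE).
const creds = new AWS.SharedIniFileCredentials({ profile: 'default' });

creds.refresh(function (err) {
  if (err) console.error('Could not load profile from ~/.aws/credentials:', err);
  else console.log('Loaded access key:', creds.accessKeyId);
});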
I've set my AWS Elasticsearch instance so that anyone can do anything (create, delete, search, etc.) to it.
These are my permissions (replace $myARN with my Elasticsearch ARN):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "$myARN"
    }
  ]
}
When I PUT a new index:
PUT http://my-elasticsearch-domain.us-west-2.es.amazonaws.com/index-name
Or I DELETE an index:
DELETE http://my-elasticsearch-domain.us-west-2.es.amazonaws.com/index-name
I get this:
{
  "acknowledged": true
}
Which means I can create and delete indexes, but when I try to POST a reindex I get:
{
  "Message": "Your request: '/_reindex' is not allowed."
}
Do I have to sign this request? Why would I have to sign this request but not the ones that create or delete indexes?
The reason is simply that the Amazon Elasticsearch Service is a kind of restricted environment where you don't have access to the full range of services and endpoints provided by a barebones install of Elasticsearch.
You can check the list of endpoints that you're allowed to use on the Amazon Elasticsearch Service, and _reindex is not part of that list.
UPDATE
There's another way to achieve what you want, though. By leveraging Logstash, you can source the data from ES, apply any transformation you wish and sink it back to ES.
input {
  elasticsearch {
    hosts => ["my-elasticsearch-domain.us-west-2.es.amazonaws.com:80"]
    index => "index-name"
    docinfo => true
  }
}

filter {
  mutate {
    remove_field => [ "@version", "@timestamp" ]
  }
  # add other transformations here
}

output {
  elasticsearch {
    hosts => ["my-elasticsearch-domain.us-west-2.es.amazonaws.com:80"]
    manage_template => false
    index => "%{[@metadata][_index]}"
    document_type => "%{[@metadata][_type]}"
    document_id => "%{[@metadata][_id]}"
  }
}
The reindex feature is not available in the older versions 1.5 and 2.3. So if you currently use version 1.5 or 2.3, it would be good to move to the latest ES version, which gets you better indexing performance and other features not supported in earlier versions.
Also have a look at the link below to learn more about the APIs supported by the different versions of AWS Elasticsearch. If you look at the 5.1 section you can see that _reindex is listed there.
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-supported-es-operations.html#es_version_5_1
I was able to do this using the following tool:
taskrabbit/elasticsearch-dump
After installing it, you can run this on the command line:
elasticdump \
  --input=http://es.com:9200/api/search \
  --input-index=my_index \
  --output=http://es.com:9200/api/search \
  --output-index=my_index \
  --type=mapping
NOTE: I did have to use the --awsChain option to find my credentials.