S3 file not downloaded when triggering a Lambda function associated with EFS

I'm using the Serverless Framework to create a Lambda function that, when triggered by an S3 upload (uploading test.vcf to s3://trigger-test/uploads/), downloads the uploaded file from S3 to EFS (specifically to the /mnt/efs/vcfs/ folder). I'm fairly new to EFS and followed the AWS documentation for setting up the EFS access point, but when I deploy this application and upload a test file to trigger the Lambda function, it fails to download the file and gives this error in the CloudWatch logs:
[ERROR] FileNotFoundError: [Errno 2] No such file or directory: '/mnt/efs/vcfs/test.vcf.A0bA45dC'
Traceback (most recent call last):
File "/var/task/handler.py", line 21, in download_files_to_efs
result = s3.download_file('trigger-test', key, efs_loci)
File "/var/runtime/boto3/s3/inject.py", line 170, in download_file
return transfer.download_file(
File "/var/runtime/boto3/s3/transfer.py", line 307, in download_file
future.result()
File "/var/runtime/s3transfer/futures.py", line 106, in result
return self._coordinator.result()
File "/var/runtime/s3transfer/futures.py", line 265, in result
raise self._exception
File "/var/runtime/s3transfer/tasks.py", line 126, in __call__
return self._execute_main(kwargs)
File "/var/runtime/s3transfer/tasks.py", line 150, in _execute_main
return_value = self._main(**kwargs)
File "/var/runtime/s3transfer/download.py", line 571, in _main
fileobj.seek(offset)
File "/var/runtime/s3transfer/utils.py", line 367, in seek
self._open_if_needed()
File "/var/runtime/s3transfer/utils.py", line 350, in _open_if_needed
self._fileobj = self._open_function(self._filename, self._mode)
File "/var/runtime/s3transfer/utils.py", line 261, in open
return open(filename, mode)
My hunch is that this has to do with the local mount path specified in the Lambda function versus the Root directory path in the Details portion of the EFS access point configuration. Ultimately, I want the test.vcf file I upload to S3 to be downloaded to the EFS folder: /mnt/efs/vcfs/.
Relevant files:
serverless.yml:
service: LambdaEFS-trigger-test
frameworkVersion: '2'
provider:
  name: aws
  runtime: python3.8
  stage: dev
  region: us-west-2
  vpc:
    securityGroupIds:
      - sg-XXXXXXXX
      - sg-XXXXXXXX
      - sg-XXXXXXXX
    subnetIds:
      - subnet-XXXXXXXXXX
functions:
  cfnPipelineTrigger:
    handler: handler.download_files_to_efs
    description: Lambda to download S3 file to EFS folder.
    events:
      - s3:
          bucket: trigger-test
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
            - suffix: .vcf
          existing: true
    fileSystemConfig:
      localMountPath: /mnt/efs
      arn: arn:aws:elasticfilesystem:us-west-2:XXXXXXXXXX:access-point/fsap-XXXXXXX
    iamRoleStatements:
      - Effect: Allow
        Action:
          - s3:ListBucket
        Resource:
          - arn:aws:s3:::trigger-test
      - Effect: Allow
        Action:
          - s3:GetObject
          - s3:GetObjectVersion
        Resource:
          - arn:aws:s3:::trigger-test/uploads/*
      - Effect: Allow
        Action:
          - elasticfilesystem:ClientMount
          - elasticfilesystem:ClientWrite
          - elasticfilesystem:ClientRootAccess
        Resource:
          - arn:aws:elasticfilesystem:us-west-2:XXXXXXXXXX:file-system/fs-XXXXXX
plugins:
  - serverless-iam-roles-per-function
package:
  individually: true
  exclude:
    - '**/*'
  include:
    - handler.py
handler.py:
import json
import boto3

s3 = boto3.client('s3', region_name='us-west-2')

def download_files_to_efs(event, context):
    """
    Locates the S3 file name (i.e. S3 object "key" value) that initiated the Lambda call, then downloads the file
    into the locally attached EFS drive at the target location.
    :param: event | S3 event record
    :return: dict
    """
    print(event)
    key = event.get('Records')[0].get('s3').get('object').get('key')  # bucket: trigger-test, key: uploads/test.vcf
    efs_loci = f"/mnt/efs/vcfs/{key.split('/')[-1]}"  # '/mnt/efs/vcfs/test.vcf'
    print("key: %s, efs_loci: %s" % (key, efs_loci))
    result = s3.download_file('trigger-test', key, efs_loci)
    if result:
        print('Download Success...')
    else:
        print('Download failed...')
    return {'status_code': 200}
EFS Access Point details:
Details
Root directory path: /vcfs
POSIX
USER ID: 1000
Group ID: 1000
Root directory creation permissions
Owner User ID: 1000
Owner Group ID: 1000
POSIX permissions to apply to the root directory path: 777

Your local mount path is localMountPath: /mnt/efs, and the access point's root directory (/vcfs on the file system) is what gets mounted there. So in your code you should use only this path (not /mnt/efs/vcfs, which does not exist under the mount):
efs_loci = f"/mnt/efs/{key.split('/')[-1]}"  # '/mnt/efs/test.vcf'
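Put together, a minimal sketch of the adjusted handler under that assumption (the bucket name and key handling are taken from the question; note that download_file returns None and raises an exception on failure, so checking its return value is not meaningful):
import boto3

s3 = boto3.client('s3', region_name='us-west-2')

def download_files_to_efs(event, context):
    """Download the uploaded S3 object onto the EFS access point mounted at /mnt/efs."""
    key = event['Records'][0]['s3']['object']['key']   # e.g. uploads/test.vcf
    efs_loci = f"/mnt/efs/{key.split('/')[-1]}"        # e.g. /mnt/efs/test.vcf
    s3.download_file('trigger-test', key, efs_loci)    # raises on failure, returns None
    print(f"Downloaded s3://trigger-test/{key} to {efs_loci}")
    return {'status_code': 200}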

Related

Confluent Schema Registry on Strimzi - pods not getting created

I have Strimzi Kafka installed on GKE (GCP), and I'm trying to install the Confluent Schema Registry following this link:
https://github.com/lsst-sqre/strimzi-registry-operator
Steps followed:
Step 1: Installed strimzi-registry-operator in the namespace schema-registry-operator.
(Note: Strimzi Kafka is installed in the namespace kafka.)
Command used:
helm repo add lsstsqre https://lsst-sqre.github.io/charts/
helm repo update
helm install ssr lsstsqre/strimzi-registry-operator -n schema-registry-operator --values values.yaml
values.yaml:
------------
# -- Name of the Strimzi Kafka cluster
clusterName: "versa-kafka-gke"
# -- Namespace where the Strimzi Kafka cluster is deployed
clusterNamespace: "kafka"
# -- Namespace where the strimzi-registry-operator is deployed
operatorNamespace: "strimzi-registry-operator"
Step 2:
Installed the KafkaTopic (registry-schemas) and KafkaUser in the namespace 'kafka'.
(Note: Strimzi Kafka is also installed in the namespace kafka.)
kafkatopic.yaml
----------------
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: registry-schemas
  labels:
    strimzi.io/cluster: versa-kafka-gke
spec:
  partitions: 1
  replicas: 3
  config:
    # http://kafka.apache.org/documentation/#topicconfigs
    cleanup.policy: compact
kafkauser.yaml
--------------
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: confluent-schema-registry
  labels:
    strimzi.io/cluster: versa-kafka-gke
spec:
  authentication:
    type: tls
  authorization:
    # Official docs on authorizations required for the Schema Registry:
    # https://docs.confluent.io/current/schema-registry/security/index.html#authorizing-access-to-the-schemas-topic
    type: simple
    acls:
      # Allow all operations on the registry-schemas topic
      # Read, Write, and DescribeConfigs are known to be required
      - resource:
          type: topic
          name: registry-schemas
          patternType: literal
        operation: All
        type: allow
      # Allow all operations on the schema-registry* group
      - resource:
          type: group
          name: schema-registry
          patternType: prefix
        operation: All
        type: allow
      # Allow Describe on the __consumer_offsets topic
      - resource:
          type: topic
          name: __consumer_offsets
          patternType: literal
        operation: Describe
        type: allow
Step 3:
Installed StrimziSchemaRegistry in the namespace strimzi-schema-operator.
Here is what I see in the namespace schema-registry-operator:
(base) Karans-MacBook-Pro:schema-registry-yamls karanalang$ kc get all -n schema-registry-operator
NAME READY STATUS RESTARTS AGE
pod/strimzi-registry-operator-7867fbc985-rddqw 1/1 Running 0 121m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/strimzi-registry-operator 1/1 1 1 121m
NAME DESIRED CURRENT READY AGE
replicaset.apps/strimzi-registry-operator-7867fbc985 1 1 1 121m
Also, when I log on to the SchemaRegistryOperator pod, I see the following error.
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/aiokits/aiotasks.py", line 108, in guard
await coro
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/reactor/queueing.py", line 175, in watcher
async for raw_event in stream:
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/watching.py", line 82, in infinite_watch
async for raw_event in stream:
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/watching.py", line 159, in continuous_watch
objs, resource_version = await fetching.list_objs(
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/fetching.py", line 28, in list_objs
rsp = await api.get(
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/api.py", line 111, in get
response = await request(
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/auth.py", line 45, in wrapper
return await fn(*args, **kwargs, context=context)
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/api.py", line 85, in request
await errors.check_response(response) # but do not parse it!
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/errors.py", line 150, in check_response
raise cls(payload, status=response.status) from e
kopf._cogs.clients.errors.APIForbiddenError: ('secrets is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "secrets" in API group "" in the namespace "kafka"', {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'secrets is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "secrets" in API group "" in the namespace "kafka"', 'reason': 'Forbidden', 'details': {'kind': 'secrets'}, 'code': 403})
[2022-11-30 23:27:39,605] kopf._cogs.clients.w [DEBUG ] Stopping the watch-stream for strimzischemaregistries.v1beta1.roundtable.lsst.codes in 'kafka'.
[2022-11-30 23:27:39,606] kopf._core.reactor.o [ERROR ] Watcher for strimzischemaregistries.v1beta1.roundtable.lsst.codes#kafka has failed: ('strimzischemaregistries.roundtable.lsst.codes is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "strimzischemaregistries" in API group "roundtable.lsst.codes" in the namespace "kafka"', {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'strimzischemaregistries.roundtable.lsst.codes is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "strimzischemaregistries" in API group "roundtable.lsst.codes" in the namespace "kafka"', 'reason': 'Forbidden', 'details': {'group': 'roundtable.lsst.codes', 'kind': 'strimzischemaregistries'}, 'code': 403})
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/errors.py", line 148, in check_response
response.raise_for_status()
File "/opt/venv/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 1004, in raise_for_status
raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://10.44.0.1:443/apis/roundtable.lsst.codes/v1beta1/namespaces/kafka/strimzischemaregistries')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/aiokits/aiotasks.py", line 108, in guard
await coro
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/reactor/queueing.py", line 175, in watcher
async for raw_event in stream:
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/watching.py", line 82, in infinite_watch
async for raw_event in stream:
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/watching.py", line 159, in continuous_watch
objs, resource_version = await fetching.list_objs(
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/fetching.py", line 28, in list_objs
rsp = await api.get(
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/api.py", line 111, in get
response = await request(
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/auth.py", line 45, in wrapper
return await fn(*args, **kwargs, context=context)
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/api.py", line 85, in request
await errors.check_response(response) # but do not parse it!
File "/opt/venv/lib/python3.10/site-packages/kopf/_cogs/clients/errors.py", line 150, in check_response
raise cls(payload, status=response.status) from e
kopf._cogs.clients.errors.APIForbiddenError: ('strimzischemaregistries.roundtable.lsst.codes is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "strimzischemaregistries" in API group "roundtable.lsst.codes" in the namespace "kafka"', {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'strimzischemaregistries.roundtable.lsst.codes is forbidden: User "system:serviceaccount:schema-registry-operator:strimzi-registry-operator" cannot list resource "strimzischemaregistries" in API group "roundtable.lsst.codes" in the namespace "kafka"', 'reason': 'Forbidden', 'details': {'group': 'roundtable.lsst.codes', 'kind': 'strimzischemaregistries'}, 'code': 403})
A few questions on this:
I don't see the Schema Registry pod (which would be listening on port 8081) getting created; only the StrimziSchemaRegistry object is created in the namespace strimzi-schema-operator.
How do I get access to the Schema Registry URL so I can upload schemas to it?
How do I resolve the permission error above?
Do I need to create a separate service account for installing the Schema Registry?
Please advise.
TIA!
Update:
This is an existing issue with the Strimzi Schema Registry operator (https://github.com/lsst-sqre/strimzi-registry-operator/issues/79).
Essentially, the ServiceAccount is not created in the correct namespace; I re-created the ServiceAccount in the namespace strimzi-registry-operator to resolve the issue.
However, I'm facing another issue (existing issue - https://github.com/lsst-sqre/strimzi-registry-operator/issues/84): the schema registry is not getting created.
Additional details:
Schema-Registry-operator is deployed in namespace 'strimzi-registry-operator'
Strimzi Kafka (cluster versa-kafka-gke) is deployed in namespace 'kafka'
Part of the Strimzi Kafka YAML, with version & listeners:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: versa-kafka-gke #1
spec:
  kafka:
    version: 3.0.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: loadbalancer
        tls: true
        authentication:
          type: tls
    authorization:
      type: simple
KafkaUser (confluent-schema-registry) & KafkaTopic (registry-schemas) are deployed in namespace 'kafka'.
Confluent Schema Registry is deployed in namespace 'kafka'.
Error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Error Logging 73s kopf Handler 'create_registry' failed with an exception. Will retry.
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 279, in execute_handler_once
result = await invoke_handler(
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 374, in invoke_handler
result = await invocation.invoke(
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/invocation.py", line 139, in invoke
await asy...al/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/venv/lib/python3.10/site-packages/strimziregistryoperator/handlers/createregistry.py", line 131, in create_registry
bootstrap_server = get_kafka_bootstrap_server(
File "/opt/venv/lib/python3.10/site-packages/strimziregistryoperator/deployments.py", line 83, in get_kafka_bootstrap_server
raise kopf.Error(msg, delay=10)
AttributeError: module 'kopf' has no attribute 'Error'
Normal Logging 73s kopf Creating a new Schema Registry deployment: confluent-schema-registry with listener=tls (security protocol=tls) and strimzi-version=v1beta2 serviceType=ClusterIP image=confluentinc/cp-schema-registry:7.2.1
Normal Logging 12s kopf Creating a new Schema Registry deployment: confluent-schema-registry with listener=tls (security protocol=tls) and strimzi-version=v1beta2 serviceType=ClusterIP image=confluentinc/cp-schema-registry:7.2.1
Error Logging 12s kopf Handler 'create_registry' failed with an exception. Will retry.
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 279, in execute_handler_once
result = await invoke_handler(
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 374, in invoke_handler
result = await invocation.invoke(
File "/opt/venv/lib/python3.10/site-packages/kopf/_core/actions/invocation.py", line 139, in invoke
await asy...al/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/venv/lib/python3.10/site-packages/strimziregistryoperator/handlers/createregistry.py", line 131, in create_registry
bootstrap_server = get_kafka_bootstrap_server(
File "/opt/venv/lib/python3.10/site-packages/strimziregistryoperator/deployments.py", line 83, in get_kafka_bootstrap_server
raise kopf.Error(msg, delay=10)
AttributeError: module 'kopf' has no attribute 'Error'
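As an aside, the final AttributeError comes from the operator code calling kopf.Error, which does not exist in the kopf release installed in that image; kopf signals retryable failures with kopf.TemporaryError instead. A hedged sketch of the pattern the operator presumably intends (the function and message here are illustrative, not the operator's actual code):
import kopf

def get_bootstrap_server_or_retry(kafka_status: dict, listener_name: str = "tls") -> str:
    """Illustrative only: return the bootstrap address, or ask kopf to retry later."""
    for listener in kafka_status.get("listeners", []):
        if listener.get("name") == listener_name:
            return listener["bootstrapServers"]
    # kopf.TemporaryError fails the handler now and has kopf retry it after `delay` seconds
    raise kopf.TemporaryError(
        f"Listener {listener_name!r} not reported in the Kafka status yet", delay=10
    )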

App.spec file for aws ECS blue green deployment

When I run the AppSpec file for an AWS ECS blue/green deployment, I get this error: "The deployment failed because the AppSpec file that specifies the deployment configuration is missing or has an invalid configuration. Failed to parse your appspec file. Please validate your appspec format and try again later." If anyone knows about this error, please let me know.
Validate your AppSpec file
File syntax validation:
You can use a browser-based tool such as YAML Lint (http://www.yamllint.com) or an online YAML parser (http://yaml-online-parser.appspot.com) to check your YAML syntax. Most of the time, this alone solves the problem. You can also check the syntax locally, as in the sketch below.
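A minimal local check using PyYAML (assuming pyyaml is installed and the file is named appspec.yml):
# pip install pyyaml
import sys
import yaml

try:
    # safe_load raises yaml.YAMLError on any syntax problem
    with open("appspec.yml") as f:
        doc = yaml.safe_load(f)
except yaml.YAMLError as exc:
    sys.exit(f"appspec.yml is not valid YAML: {exc}")

print("appspec.yml parsed OK; top-level keys:", list(doc))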
File location validation:
Make sure your AppSpec file is named appspec.yml. To verify that you have placed the AppSpec file in the root directory of the application's source content directory structure, run one of the following commands:
On local Linux, macOS, or Unix instances:
ls path/to/root/directory/appspec.yml
If the AppSpec file is not located there, a "No such file or directory" error is displayed.
On local Windows instances:
dir path\to\root\directory\appspec.yml
If the AppSpec file is not located there, a "File Not Found" error is displayed.
AppSpec File example for an Amazon ECS deployment
Following is an example of an AppSpec file written in YAML for deploying an Amazon ECS service.
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "arn:aws:ecs:us-east-1:111222333444:task-definition/my-task-definition-family-name:1"
        LoadBalancerInfo:
          ContainerName: "SampleApplicationName"
          ContainerPort: 80
        # Optional properties
        PlatformVersion: "LATEST"
        NetworkConfiguration:
          AwsvpcConfiguration:
            Subnets: ["subnet-1234abcd", "subnet-5678abcd"]
            SecurityGroups: ["sg-12345678"]
            AssignPublicIp: "ENABLED"
        CapacityProviderStrategy:
          - Base: 1
            CapacityProvider: "FARGATE_SPOT"
            Weight: 2
          - Base: 0
            CapacityProvider: "FARGATE"
            Weight: 1
Hooks:
  - BeforeInstall: "LambdaFunctionToValidateBeforeInstall"
  - AfterInstall: "LambdaFunctionToValidateAfterInstall"
  - AfterAllowTestTraffic: "LambdaFunctionToValidateAfterTestTrafficStarts"
  - BeforeAllowTraffic: "LambdaFunctionToValidateBeforeAllowingProductionTraffic"
  - AfterAllowTraffic: "LambdaFunctionToValidateAfterAllowingProductionTraffic"
AppSpec File example for an Amazon EC2 deployment
See the hooks below for which hooks are available and what each one is used for in a successful deployment.
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
file_exists_behavior: OVERWRITE
permissions:
  - object: /var/www/html
    pattern: "**"
    owner: root
    group: www-data
    mode: 644 # gives read and write permissions to the owner of the object (6), read-only permissions to the group (4), and read-only permissions to all other users (4)
    # acls:
    #   - u:deployer:rwx
    type:
      - file
  - object: /var/www/html
    pattern: "**"
    owner: root
    group: www-data
    mode: 755 # gives full control permissions to the owner (7), read and execute permissions to the group (5), and read and execute permissions to all other users (5)
    # acls:
    #   - u:deployer:rwx
    type:
      - directory
hooks:
  BeforeBlockTraffic: # run tasks on instances before they are deregistered from a load balancer
    - location: ./devops/hooks/1_BeforeBlockTraffic.sh
      timeout: 300
      runas: deployer # we are running CodeDeploy as a non-root user and only the root user can runas "su" without password authentication
  # BlockTraffic: # can't be scripted
  #   - location: ./devops/hooks/2_BlockTraffic.sh
  #     timeout: 300
  #     # runas: deployer
  AfterBlockTraffic: # run tasks on instances after they are deregistered from a load balancer
    - location: ./devops/hooks/3_AfterBlockTraffic.sh
      timeout: 300
      runas: deployer
  ApplicationStop: # occurs even before the application revision is downloaded
    - location: ./devops/hooks/4_ApplicationStop.sh
      timeout: 300
      runas: root
  # DownloadBundle: # can't be scripted
  #   - location: ./devops/hooks/5_DownloadBundle.sh
  #     timeout: 300
  #     runas: deployer
  BeforeInstall:
    - location: ./devops/hooks/6_BeforeInstall.sh
      timeout: 300
      runas: root
  # Install: # can't be scripted; copies the revision files from the temporary location to the final destination folder
  #   - location: ./devops/hooks/7_Install.sh
  #     timeout: 300
  #     runas: deployer
  AfterInstall:
    - location: ./devops/hooks/8_AfterInstall.sh
      timeout: 300
      # runas: deployer
  ApplicationStart:
    - location: ./devops/hooks/9_ApplicationStart.sh
      timeout: 300
      runas: root
  ValidateService:
    - location: ./devops/hooks/10_ValidateService.sh
      timeout: 300
      runas: deployer
  BeforeAllowTraffic: # run tasks on instances before they are registered with a load balancer
    - location: ./devops/hooks/11_BeforeAllowTraffic.sh
      timeout: 300
      # runas: deployer
  # AllowTraffic: # can't be scripted
  #   - location: ./devops/hooks/12_AllowTraffic.sh
  #     timeout: 300
  #     # runas: deployer
  AfterAllowTraffic: # run tasks on instances after they are registered with a load balancer
    - location: ./devops/hooks/13_AfterAllowTraffic.sh
      timeout: 300
      # runas: deployer

Reading CSV file from S3 using Lambda Function-GetObject operation: Access Denied

I am trying to read a CSV file (only the column names) from an S3 bucket using a Lambda function. I have created an S3 trigger within Lambda. Here is the sample code:
import json
import boto3
import csv

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # TODO implement
    bucket = event['Records'][0]['s3']['bucket']['name']
    csv_file = event['Records'][0]['s3']['object']['key']
    response = s3_client.get_object(Bucket=bucket, Key=csv_file)
    lines = response['Body'].read().decode('utf-8').split()
    results = []
    for row in csv.DictReader(lines):
        results.append(row.name())
    print(results)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
Whenever I try to upload a new file, I get this error:
[ERROR] ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 10, in lambda_handler
response = s3_client.get_object(Bucket=bucket, Key=csv_file)
File "/var/runtime/botocore/client.py", line 386, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 705, in _make_api_call
raise error_class(parsed_response, operation_name
I added a specific role and provided necessary permissions to my S3 bucket.
Snippet of my S3 bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
I have provided the necessary permissions to my S3 bucket, but I am still getting this error:
Response
{
"errorMessage": "An error occurred (AccessDenied) when calling the GetObject operation: Access Denied",
"errorType": "ClientError",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 19, in lambda_handler\n response = s3_client.get_object(Bucket=bucket, Key=csv_file)\n",
" File \"/var/runtime/botocore/client.py\", line 386, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
" File \"/var/runtime/botocore/client.py\", line 705, in _make_api_call\n raise error_class(parsed_response, operation_name)\n"
]
}
Can anyone tell me why I am getting this error?
Assuming that your S3 bucket is the one in charge of invoking the Lambda function, two parties need permissions:
1) The bucket needs a policy that allows it to trigger the function.
2) The Lambda that pulls the CSV files from the bucket needs a policy too. For the second part, consider the pre-built policy templates available in SAM; they make your policy definitions more readable and also limit the actions your Lambda can perform on your buckets. The first sample below shows how to grant S3 CRUD permissions.
S3CsvReactor:
  Type: "AWS::Serverless::Function"
  Properties:
    FunctionName: "csv-process-function"
    CodeUri: csv-processor-function/
    Handler: app.execute
    Timeout: 30 # Seconds
    Runtime: python3.8
    MemorySize: 512
    Policies:
      - S3CrudPolicy:
          BucketName: "s3-containing-your-csv"
The example below showcases a read-only implementation:
S3CsvReactor:
  Type: "AWS::Serverless::Function"
  Properties:
    FunctionName: "csv-process-function"
    CodeUri: csv-processor-function/
    Handler: app.execute
    Timeout: 30 # Seconds
    Runtime: python3.8
    MemorySize: 512
    Policies:
      - S3ReadPolicy:
          BucketName: "s3-containing-your-csv"
The example below showcases a write-only implementation:
S3CsvReactor:
  Type: "AWS::Serverless::Function"
  Properties:
    FunctionName: "csv-process-function"
    CodeUri: csv-processor-function/
    Handler: app.execute
    Timeout: 30 # Seconds
    Runtime: python3.8
    MemorySize: 512
    Policies:
      - S3WritePolicy:
          BucketName: "s3-containing-your-csv"
Make sure the role can perform the PutObject and GetObject IAM actions on the bucket resource specified in the IAM policy. As good practice, wrap your logic in a try/except block to catch errors; you might be surprised to find that the S3 error being propagated occurs earlier than expected. Furthermore, in the Lambda console, select the function and open the "Monitor" tab to be redirected to CloudWatch Logs, where you can read more details about the error, especially if you surface exceptions early. A rough sketch of that pattern follows.
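As a hedged sketch of that try/except pattern (the bucket and key come from the S3 event as in the question; reading only the header row via DictReader.fieldnames is an assumption about what "only column name" means):
import csv
import io
import boto3
from botocore.exceptions import ClientError

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    try:
        response = s3_client.get_object(Bucket=bucket, Key=key)
    except ClientError as err:
        # Surfaces AccessDenied (and other S3 errors) with context in CloudWatch Logs
        print(f"get_object failed for s3://{bucket}/{key}: {err}")
        raise
    body = response['Body'].read().decode('utf-8')
    reader = csv.DictReader(io.StringIO(body))
    print("Column names:", reader.fieldnames)
    return {'statusCode': 200}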

Unable to run ansible playbook command - potential Authentication error (msrest)

I have recently started using Ansible to automate the deployment of a Docker image to an Azure Kubernetes Service.
I have an Ansible playbook called azure_create_aks.yml. I am running the following command on my Mac: ansible-playbook azure_create_aks.yml, but it fails with the following (snippet from the stack trace):
msrest.exceptions.AuthenticationError: , AdalError: Get Token request returned http error: 400 and server response: Bad Request
I've tried uninstalling ansible and azure-cli and reinstalling them using the following:
- brew update && brew install azure-cli
- az aks install-cli
- pip3 install ansible[azure]
I also tried uninstalling Python 3 so that it would use Python 2 instead. From looking around on Stack Overflow, I think I might be encountering a dependency issue with msrestazure, or possibly an issue with the version of pip or Python I have locally.
After running ansible-playbook azure_create_aks.yml, I get the following:
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Create Azure Kubernetes Service] *********************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************
ok: [localhost]
TASK [Create resource group] *******************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "/Users/hughej/.ansible/tmp/ansible-tmp-1569328685.354382-6386128387997/AnsiballZ_azure_rm_resourcegroup.py:18: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\n import imp\nTraceback (most recent call last):\n File \"/usr/local/lib/python3.7/site-packages/msrestazure/azure_active_directory.py\", line 366, in set_token\n self.secret\n File \"/usr/local/lib/python3.7/site-packages/adal/authentication_context.py\", line 179, in acquire_token_with_client_credentials\n return self._acquire_token(token_func)\n File \"/usr/local/lib/python3.7/site-packages/adal/authentication_context.py\", line 128, in _acquire_token\n return token_func(self)\n File \"/usr/local/lib/python3.7/site-packages/adal/authentication_context.py\", line 177, in token_func\n return token_request.get_token_with_client_credentials(client_secret)\n File \"/usr/local/lib/python3.7/site-packages/adal/token_request.py\", line 310, in get_token_with_client_credentials\n token = self._oauth_get_token(oauth_parameters)\n File \"/usr/local/lib/python3.7/site-packages/adal/token_request.py\", line 112, in _oauth_get_token\n return client.get_token(oauth_parameters)\n File \"/usr/local/lib/python3.7/site-packages/adal/oauth2_client.py\", line 289, in get_token\n raise AdalError(return_error_string, error_response)\nadal.adal_error.AdalError: Get Token request returned http error: 400 and server response: Bad Request\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/Users/hughej/.ansible/tmp/ansible-tmp-1569328685.354382-6386128387997/AnsiballZ_azure_rm_resourcegroup.py\", line 114, in <module>\n _ansiballz_main()\n File \"/Users/hughej/.ansible/tmp/ansible-tmp-1569328685.354382-6386128387997/AnsiballZ_azure_rm_resourcegroup.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/Users/hughej/.ansible/tmp/ansible-tmp-1569328685.354382-6386128387997/AnsiballZ_azure_rm_resourcegroup.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n File \"/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/imp.py\", line 169, in load_source\n module = _exec(spec, sys.modules[name])\n File \"<frozen importlib._bootstrap>\", line 630, in _exec\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n File \"/var/folders/2t/30gk2pfx5_n08tfd45g3v674b8c1y8/T/ansible_azure_rm_resourcegroup_payload_6hqj1_fs/__main__.py\", line 266, in <module>\n File \"/var/folders/2t/30gk2pfx5_n08tfd45g3v674b8c1y8/T/ansible_azure_rm_resourcegroup_payload_6hqj1_fs/__main__.py\", line 262, in main\n File \"/var/folders/2t/30gk2pfx5_n08tfd45g3v674b8c1y8/T/ansible_azure_rm_resourcegroup_payload_6hqj1_fs/__main__.py\", line 144, in __init__\n File \"/var/folders/2t/30gk2pfx5_n08tfd45g3v674b8c1y8/T/ansible_azure_rm_resourcegroup_payload_6hqj1_fs/ansible_azure_rm_resourcegroup_payload.zip/ansible/module_utils/azure_rm_common.py\", line 318, in __init__\n File 
\"/var/folders/2t/30gk2pfx5_n08tfd45g3v674b8c1y8/T/ansible_azure_rm_resourcegroup_payload_6hqj1_fs/ansible_azure_rm_resourcegroup_payload.zip/ansible/module_utils/azure_rm_common.py\", line 1095, in __init__\n File \"/usr/local/lib/python3.7/site-packages/msrestazure/azure_active_directory.py\", line 354, in __init__\n self.set_token()\n File \"/usr/local/lib/python3.7/site-packages/msrestazure/azure_active_directory.py\", line 370, in set_token\n raise_with_traceback(AuthenticationError, \"\", err)\n File \"/usr/local/lib/python3.7/site-packages/msrest/exceptions.py\", line 54, in raise_with_traceback\n raise error.with_traceback(exc_traceback)\n File \"/usr/local/lib/python3.7/site-packages/msrestazure/azure_active_directory.py\", line 366, in set_token\n self.secret\n File \"/usr/local/lib/python3.7/site-packages/adal/authentication_context.py\", line 179, in acquire_token_with_client_credentials\n return self._acquire_token(token_func)\n File \"/usr/local/lib/python3.7/site-packages/adal/authentication_context.py\", line 128, in _acquire_token\n return token_func(self)\n File \"/usr/local/lib/python3.7/site-packages/adal/authentication_context.py\", line 177, in token_func\n return token_request.get_token_with_client_credentials(client_secret)\n File \"/usr/local/lib/python3.7/site-packages/adal/token_request.py\", line 310, in get_token_with_client_credentials\n token = self._oauth_get_token(oauth_parameters)\n File \"/usr/local/lib/python3.7/site-packages/adal/token_request.py\", line 112, in _oauth_get_token\n return client.get_token(oauth_parameters)\n File \"/usr/local/lib/python3.7/site-packages/adal/oauth2_client.py\", line 289, in get_token\n raise AdalError(return_error_string, error_response)\nmsrest.exceptions.AuthenticationError: , AdalError: Get Token request returned http error: 400 and server response: Bad Request\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
I expected the ansible-playbook command to run and deploy to Azure; however, this authentication error is stopping the process.
By the way, here's my Ansible playbook file (sanitised):
- name: Create Azure Kubernetes Service
  hosts: localhost
  connection: local
  vars:
    resource_group: pipeline-in-a-box
    location: uksouth
    aks_name: pipeline-in-a-box-cluster
    username: "devOpsBot"
    ssh_key: "My public SSH key"
    client_id: "service principal id"
    client_secret: "service principal password"
    kubernetes_version: "1.14.6"
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: "{{ resource_group }}"
        location: "{{ location }}"
    - name: Create a managed Azure Container Services (AKS) cluster
      azure_rm_aks:
        name: "{{ aks_name }}"
        location: "{{ location }}"
        resource_group: "{{ resource_group }}"
        dns_prefix: "{{ aks_name }}"
        kubernetes_version: "{{ kubernetes_version }}"
        linux_profile:
          admin_username: "{{ username }}"
          ssh_key: "{{ ssh_key }}"
        service_principal:
          client_id: "{{ client_id }}"
          client_secret: "{{ client_secret }}"
        agent_pool_profiles:
          - name: default
            count: 2
            vm_size: Standard_D2_v2
        tags:
          Environment: Test
    - name: Create Azure Storage Account
      azure_rm_storageaccount:
        resource_group: "{{ resource_group }}"
        name: piabstorage
        type: Standard_RAGRS
        tags:
          testing: testing
          delete: on-exit
    - name: Create managed disk
      azure_rm_manageddisk:
        name: piabdisk
        location: uksouth
        resource_group: "{{ resource_group }}"
        disk_size_gb: 1
    - name: Create an Azure container registry
      azure_rm_containerregistry:
        name: piabregistry
        location: "{{ location }}"
        resource_group: "{{ resource_group }}"
        admin_user_enabled: True
        sku: Basic
      register: acr_result
    - name: Push docker image to container registry
      docker_image:
        name: atlassian/confluence-server
        repository: piabregistry.azurecr.io
        push: yes
        source: pull
    - name: Create Azure Container Instance
      azure_rm_containerinstance:
        resource_group: "{{ resource_group }}"
        name: piabcontainer
        ip_address: public
        ports:
          - "8090"
          - "8091"
        registry_login_server: piabregistry.azurecr.io
        registry_username: piabregistry
        registry_password: "{{ acr_result.credentials.password }}"
        containers:
          - name: confluence-server
            ports:
              - "8090"
              - "8091"
            image: atlassian/confluence-server
    - name: Get details of the AKS
      azure_rm_aks_facts:
        name: aksfacts
        resource_group: "{{ resource_group }}"
        show_kubeconfig: user
    - name: Show AKS cluster detail
      debug:
        var: output.aks[0]

AWS: IAM permission discrepancies

I am provisioning an ECS cluster using this template as provided by AWS.
I also want to add a file from an S3 bucket, but when I add the following
files:
  "/home/ec2-user/.ssh/authorized_keys":
    mode: "000600"
    owner: ec2-user
    group: ec2-user
    source: "https://s3-eu-west-1.amazonaws.com/mybucket/myfile"
the provisioning fails with this error in /var/log/cfn-init.log:
[root@ip-10-17-19-56 ~]# tail -f /var/log/cfn-init.log
File "/usr/lib/python2.7/dist-packages/cfnbootstrap/construction.py", line 251, in build
changes['files'] = FileTool().apply(self._config.files, self._auth_config)
File "/usr/lib/python2.7/dist-packages/cfnbootstrap/file_tool.py", line 138, in apply
self._write_file(f, attribs, auth_config)
File "/usr/lib/python2.7/dist-packages/cfnbootstrap/file_tool.py", line 225, in _write_file
raise ToolError("Failed to retrieve %s: %s" % (source, e.strerror))
ToolError: Failed to retrieve https://s3-eu-west-1.amazonaws.com/mybucket/myfile: HTTP Error 403 : <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>C6CDAC18E57345BF</RequestId><HostId>VFCrqxtbAsTeFrGxp/nzgBqJdwC7IsS3phjvPq/YzhUk8zuRhemquovq3Plc8aqFC73ki78tK+U=</HostId></Error>
However, from within the instance (without the above section), the following command succeeds!
aws s3 cp s3://mybucket/myfile .
You need to use the AWS::CloudFormation::Authentication resource to specify authentication credentials for files or sources that you specify with the AWS::CloudFormation::Init resource.
Example:
Metadata:
  AWS::CloudFormation::Init:
    ...
  AWS::CloudFormation::Authentication:
    S3AccessCreds:
      type: "S3"
      buckets:
        - "mybucket"
      roleName:
        Ref: "myRole"