I am trying to deploy a Lambda function using an Ansible playbook.
Lambda code:
import boto3
import os

ecs = boto3.client('ecs')

LAMBDA_ENV = ''
if 'stack_name' in os.environ:
    LAMBDA_ENV = os.environ['stack_name']

def task(event, context):
    get_task_arn = ecs.list_tasks(
        cluster=LAMBDA_ENV,
        family=LAMBDA_ENV + '-Wallet-Scheduler',
        desiredStatus='RUNNING'
    )
    # print(get_task_arn)
    task = ''.join(get_task_arn['taskArns'])
    print(task)
    stop_task = ecs.stop_task(
        cluster=LAMBDA_ENV,
        task=task,
        reason='test'
    )
The command I use to deploy the Lambda function is
ansible-playbook -e stack_name=DEV playbook.yaml
How do I make sure the variable LAMBDA_ENV in the Python file changes to DEV, STAGE, or PRD based on the environment when it gets deployed?
Ansible Playbook
- name: package python code to a zip file
  shell: |
    cd files/
    rm allet-restart.py
    zip file.zip file.py

- name: Create lambda function
  lambda:
    name: '{{ stack_name | lower }}-lambda-function'
    state: present
    zip_file: 'files/file.zip'
    runtime: python2.7
    role: '{{ role_arn }}'
    timeout: 60
    handler: file.task
  with_items:
    - env_vars:
        stack_name: 'test'
  register: wallet-restart
I am deploying from macOS.
AWS Lambda supports environment variables, and they can be accessed from the Lambda code.
With this, you can avoid hardcoding parameters inside the code.
environment_variables is the parameter of Ansible's lambda module that lets you set environment variables on the Lambda function
(Ref: https://docs.ansible.com/ansible/latest/modules/lambda_module.html)
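A minimal sketch of how that parameter could slot into the task from the question (the other module arguments are copied from the question's playbook; passing stack_name straight through is an assumption about what the function should see):

- name: Create lambda function
  lambda:
    name: '{{ stack_name | lower }}-lambda-function'
    state: present
    zip_file: 'files/file.zip'
    runtime: python2.7
    role: '{{ role_arn }}'
    timeout: 60
    handler: file.task
    environment_variables:
      stack_name: '{{ stack_name }}'

The Lambda code can then read os.environ['stack_name'] and will see DEV, STAGE or PRD depending on what you pass with -e stack_name=... at deploy time.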
If you are using Python, you can access the Lambda environment variables via the os module:
import os

LAMBDA_ENV = ''
if 'ENV' in os.environ:
    LAMBDA_ENV = os.environ['ENV']
Hope this helps !!!
You can use the Ansible template module to substitute the env variables into the Python code for the Lambda, then zip the files using the shell module, and then deploy it with the lambda module.
- name: template module
  template:
    src:
    dest:

- name: zip the templated python code inside the zip
  shell: zip ...

- name: invoke lambda
  lambda:
    ....
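For completeness, a hedged sketch of that flow reusing names from the question; the template file templates/file.py.j2 (containing a line like LAMBDA_ENV = '{{ stack_name }}') is an assumption, not something from the original post:

- name: render the python code with the environment baked in
  template:
    src: templates/file.py.j2
    dest: files/file.py

- name: zip the templated python code
  shell: |
    cd files/
    zip file.zip file.py

- name: create the lambda function
  lambda:
    name: '{{ stack_name | lower }}-lambda-function'
    state: present
    zip_file: 'files/file.zip'
    runtime: python2.7
    role: '{{ role_arn }}'
    timeout: 60
    handler: file.task

With this approach the environment value is baked into the code at deploy time; the environment_variables approach above keeps a single artifact and sets the value on the function instead.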
Hope you are doing well.
I wanted to check if anyone has gotten dbt up and running in AWS MWAA (Airflow).
I have tried this one and this Python package without success; it fails for one reason or another (can't find the dbt path, etc.).
Has anyone managed to use MWAA (Airflow 2) and dbt without having to build a Docker image and place it somewhere?
Thank you!
I've managed to solve this by doing the following steps:
Add dbt-core==0.19.1 to your requirements.txt
Add the dbt CLI executable into plugins.zip:
#!/usr/bin/env python3
# EASY-INSTALL-ENTRY-SCRIPT: 'dbt-core==0.19.1','console_scripts','dbt'
__requires__ = 'dbt-core==0.19.1'
import re
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(
        load_entry_point('dbt-core==0.19.1', 'console_scripts', 'dbt')()
    )
And from here you have two options:
Setting the dbt_bin operator argument to /usr/local/airflow/plugins/dbt (see the sketch after the plugins.zip layout below)
Adding /usr/local/airflow/plugins/ to the $PATH by following the docs
Environment variable setter example:
from airflow.plugins_manager import AirflowPlugin
import os

os.environ["PATH"] = os.getenv(
    "PATH") + ":/usr/local/airflow/.local/lib/python3.7/site-packages:/usr/local/airflow/plugins/"


class EnvVarPlugin(AirflowPlugin):
    name = 'env_var_plugin'
The plugins zip content:
plugins.zip
├── dbt (DBT cli executable)
└── env_var_plugin.py (environment variable setter)
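With the executable shipped in plugins.zip, option 1 above might look roughly like this inside a DAG. This is only a sketch: it assumes the airflow-dbt package's DbtRunOperator and a dbt project checked out under the dags folder, so adjust the paths to your layout:

from airflow import DAG
from airflow.utils.dates import days_ago
from airflow_dbt.operators.dbt_operator import DbtRunOperator

with DAG(dag_id="dbt_run", start_date=days_ago(1), schedule_interval=None) as dag:
    dbt_run = DbtRunOperator(
        task_id="dbt_run",
        dbt_bin="/usr/local/airflow/plugins/dbt",      # the executable from plugins.zip
        profiles_dir="/usr/local/airflow/dags/dbt/",   # assumed project location
        dir="/usr/local/airflow/dags/dbt/",
    )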
Using the PyPI airflow-dbt-python package has simplified the setup of dbt on MWAA for us, as it avoids needing to amend the PATH environment variable in the plugins file. However, I've yet to have a successful dbt run via either the airflow-dbt or airflow-dbt-python packages: the MWAA worker filesystem seems to be read-only, so as soon as dbt tries to compile to the target directory, the following error occurs:
File "/usr/lib64/python3.7/os.py", line 223, in makedirs
mkdir(name, mode)
OSError: [Errno 30] Read-only file system: '/usr/local/airflow/dags/dbt/target'
This is how I managed to do it:
import os

from airflow.decorators import dag, task


@dag(**default_args)
def dbt_dag():
    @task()
    def run_dbt():
        from dbt.main import handle_and_check

        os.environ["DBT_TARGET_DIR"] = "/usr/local/airflow/tmp/target"
        os.environ["DBT_LOG_DIR"] = "/usr/local/airflow/tmp/logs"
        os.environ["DBT_PACKAGE_DIR"] = "/usr/local/airflow/tmp/packages"

        succeeded = True
        try:
            args = ['run', '--whatever', 'bla']
            results, succeeded = handle_and_check(args)
            print(results, succeeded)
        except SystemExit as e:
            if e.code != 0:
                raise e

        if not succeeded:
            raise Exception("DBT failed")
Note that my dbt_project.yml has the following paths; this is to avoid an OS exception when dbt tries to write to read-only paths:
target-path: "{{ env_var('DBT_TARGET_DIR', 'target') }}" # directory which will store compiled SQL files
log-path: "{{ env_var('DBT_LOG_DIR', 'logs') }}" # directory which will store dbt logs
packages-install-path: "{{ env_var('DBT_PACKAGE_DIR', 'packages') }}" # directory which will store dbt packages
Combining the answers from @Yonatan Kiron and @Ofer Helman works for me.
I just needed to fix these 3 files:
requirements.txt
plugins.zip
dbt_project.yml
In my requirements.txt I specify the versions I want; it looks like this:
airflow-dbt==0.4.0
dbt-core==1.0.1
dbt-redshift==1.0.0
Note that, as of v1.0.0, pip install dbt is no longer supported and raises an explicit error. Since v0.13, the PyPI package named dbt was a simple "pass-through" of dbt-core. (See https://docs.getdbt.com/dbt-cli/install/pip#install-dbt-core-only)
For my plugins.zip I add a file env_var_plugin.py that looks like this:
from airflow.plugins_manager import AirflowPlugin
import os

os.environ["DBT_LOG_DIR"] = "/usr/local/airflow/tmp/logs"
os.environ["DBT_PACKAGE_DIR"] = "/usr/local/airflow/tmp/dbt_packages"
os.environ["DBT_TARGET_DIR"] = "/usr/local/airflow/tmp/target"


class EnvVarPlugin(AirflowPlugin):
    name = 'env_var_plugin'
And finally I add this to my dbt_project.yml:
log-path: "{{ env_var('DBT_LOG_DIR', 'logs') }}" # directory which will store dbt logs
packages-install-path: "{{ env_var('DBT_PACKAGE_DIR', 'dbt_packages') }}" # directory which will store dbt packages
target-path: "{{ env_var('DBT_TARGET_DIR', 'target') }}" # directory which will store compiled SQL files
And, as stated in the airflow-dbt GitHub README (https://github.com/gocardless/airflow-dbt#amazon-managed-workflows-for-apache-airflow-mwaa), configure the dbt task like below:
dbt_bin='/usr/local/airflow/.local/bin/dbt',
profiles_dir='/usr/local/airflow/dags/{DBT_FOLDER}/',
dir='/usr/local/airflow/dags/{DBT_FOLDER}/'
I am trying to execute an Ansible playbook which uses the script module to run a custom Python script.
This custom Python script imports another Python script.
On execution of the playbook, the Ansible command fails while trying to import the util script. I am new to Ansible, please help!
helloWorld.yaml:
- hosts: all
  tasks:
    - name: Create a directory
      script: /ansible/ems/ansible-mw-tube/modules/createdirectory.py "{{arg1}}"
createdirectory.py -- Script configured in YAML playbook
#!/bin/python
import sys
import os
from hello import HelloWorld


class CreateDir:
    def create(self, dirName, HelloWorldContext):
        output = HelloWorld.createFolder(HelloWorldContext, dirName)
        print output
        return output


def main(dirName, HelloWorldContext):
    c = CreateDir()
    c.create(dirName, HelloWorldContext)


if __name__ == "__main__":
    HelloWorldContext = HelloWorld()
    main(sys.argv[1], HelloWorldContext)
hello.py -- util script which is imported in the main script written above
#!/bin/python
import os
import sys


class HelloWorld:
    def createFolder(self, dirName):
        print dirName
        if not os.path.exists(dirName):
            os.makedirs(dirName)
        print dirName
        if os.path.exists(dirName):
            return "success"
        else:
            return "failure"
Ansible executable command
ansible-playbook -v -i /ansible/ems/ansible-mw-tube/inventory/helloworld_host /ansible/ems/ansible-mw-tube/playbooks/helloWorld.yml -e "arg1=/opt/logs/helloworld"
Ansible version
ansible --version
[WARNING]: log file at /opt/ansible/ansible.log is not writeable and we cannot create it, aborting
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
The script module copies the script to the remote server and executes it there using the shell. It can't find the util script because it doesn't transfer that file; it doesn't know that it needs to.
You have several options, such as using copy to move both files to the server and shell to execute them. But since what you seem to be doing is creating a directory, the file module can do that for you with no scripts necessary, as sketched below.
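A minimal sketch of both options; the /tmp destination is an assumption, and the source paths are the ones from the question:

# Option 1: copy both scripts to the remote host, then run the entry script there
- name: Copy the scripts to the remote host
  copy:
    src: "{{ item }}"
    dest: /tmp/
    mode: "0755"
  with_items:
    - /ansible/ems/ansible-mw-tube/modules/createdirectory.py
    - /ansible/ems/ansible-mw-tube/modules/hello.py

- name: Run the script on the remote host
  shell: python /tmp/createdirectory.py "{{ arg1 }}"

# Option 2: no scripts at all -- the file module creates the directory itself
- name: Create a directory
  file:
    path: "{{ arg1 }}"
    state: directory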
I'm using docker-compose v2 to build my containers (Django and Nginx).
I'm wondering how to store the static and media files. At the beginning I stored them as a volume on the machine, but the machine crashed and I lost the data (or at least, I didn't know how to recover it).
I thought it's better to store it on Amazon S3, but there are no guides for that (maybe it means something :) ).
This is my docker-compose file:
I tried to add the needed fields (name, key, secret, ...) but no success so far.
Is it the right way?
Thanks!
version: '2'

services:
  web:
    build:
      context: ./web/
      dockerfile: Dockerfile
    expose:
      - "8000"
    volumes:
      - ./web:/code
      - static-data:/www/static
      - media-data:/www/media
    env_file: devEnv

  nginx:
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - static-data:/www/static
      - media-data:/www/media
    volumes_from:
      - web
    links:
      - web:web

volumes:
  static-data:
    driver: local
  media-data:
    driver: s3
Here is an example of how to upload files to S3 (for backup) from a container; it could be run on the host OS as well, since the container's volume is mounted on the host.
In this script I first download the media from S3 to the local container/server. After that, I use pyinotify to watch the static/media dir for modifications. If any change occurs, it uploads the file to S3 using subprocess.Popen(upload_command.split(" ")).
I think you can adapt this script for your problem too.
Before you test this script, you should set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as OS environment variables.
For more details, see the s4cmd documentation.
# -*- coding: utf-8 -*-
import pyinotify
import os
import subprocess
from single_process import single_process

# You must have set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
# in environment variables
PROJECT_DIR = os.getcwd()
MEDIA_DIR = os.path.join(PROJECT_DIR, "static/media")
AWS_BUCKET_NAME = os.environ.get("AWS_BUCKET_NAME", '')

S4CMD_DOWNLOAD_MEDIA = "s4cmd get --sync-check --recursive s3://%s/static/media/ static/" % (AWS_BUCKET_NAME)
UPLOAD_FILE_TO_S3 = "s4cmd sync --sync-check %(absolute_file_dir)s s3://" + AWS_BUCKET_NAME + "/%(relative_file_dir)s"

# Download all media from S3
subprocess.Popen(S4CMD_DOWNLOAD_MEDIA.split(" ")).wait()


class ModificationsHandler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        try:
            dir = event.path
            file_name = event.name
            absolute_file_dir = os.path.join(dir, file_name)
            relative_dir = dir.replace(PROJECT_DIR, "")
            relative_file_dir = os.path.join(relative_dir, file_name)
            if relative_file_dir.startswith("/"):
                relative_file_dir = relative_file_dir[1:]

            print("\nSending file %s to S3" % absolute_file_dir)
            param = {}
            param.update(absolute_file_dir=absolute_file_dir)
            param.update(relative_file_dir=relative_file_dir)
            upload_command = UPLOAD_FILE_TO_S3 % param
            print(upload_command)
            subprocess.Popen(upload_command.split(" "))
        except Exception as e:
            # log exceptions
            print("Some problem:", e.message)


@single_process
def main():
    handler = ModificationsHandler()
    wm = pyinotify.WatchManager()
    notifier = pyinotify.Notifier(wm, handler)
    print("\nListening for changes in: " + MEDIA_DIR)
    if MEDIA_DIR:
        wdd = wm.add_watch(MEDIA_DIR, pyinotify.IN_CLOSE_WRITE, auto_add=True, rec=True)
        notifier.loop()


if __name__ == "__main__":
    main()
I'm trying to use Cloud Endpoints to build my API (on a local server).
But I've hit a problem. I added the Cloud Endpoints code to helloworld.py, like this:
import endpoints
from protorpc import messages, remote


class StringMsg(messages.Message):
    """Greeting that stores a message."""
    msg = messages.StringField(1)


class SringListMsg(messages.Message):
    """Collection of Greetings."""
    items = messages.StringField(1, repeated=True)


@endpoints.api(name='test', version='v3', description="FIRST")
class test(remote.Service):

    @endpoints.method(
        StringMsg,
        SringListMsg
    )
    # path='test',
    # http_method='GET',
    # name='test')
    def test(self, request):
        msg = request.msg
        splitted = msg.split()
        return SringListMsg(items=splitted)


api = endpoints.api_server([test])
According to Google's documentation, the Cloud Endpoints handler has to be declared in app.yaml. So I put it in my app.yaml, like this:
application: blabla
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /_ah/spi/.*
  script: helloworld_api.api
- url: /.*
  script: main.app

libraries:
- name: webapp2
  version: latest
- name: jinja2
  version: latest
- name: pycrypto
  version: "2.6"
- name: endpoints
  version: 1.0
NOW, in my case I need to start three modules, so I use
dev_appserver.py app.yaml dispatch.yaml backend.yaml frontmodule.yaml
And I go to
localhost:8080/_ah/api/explorer
I didn't see my API, and I get this error:
Skipping dispatch.yaml rules because /_ah/api/explorer is not a dispatcher path
But I DIDN'T add /_ah/api/explorer to my dispatch.yaml.
Somebody help me please!!!
I downloaded the AWS CLI and was able to successfully list objects from my bucket. But doing the same from a Python script does not work; the error is a Forbidden error.
How should I configure boto to use the same default AWS credentials (as used by the AWS CLI)?
Thank you
import logging
import sys
import time
import urllib, subprocess, boto, boto.utils, boto.s3

logger = logging.getLogger("test")
formatter = logging.Formatter('%(asctime)s %(message)s')
file_handler = logging.FileHandler("test.log")
file_handler.setFormatter(formatter)
stream_handler = logging.StreamHandler(sys.stderr)
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
logger.setLevel(logging.INFO)

# wait until user data is available
while True:
    logger.info('**************************** Test starts *******************************')
    userData = boto.utils.get_instance_userdata()
    if userData:
        break
    time.sleep(5)

bucketName = ''
deploymentDomainName = ''

if bucketName:
    from boto.s3.key import Key
    s3Conn = boto.connect_s3('us-east-1')
    logger.info(s3Conn)
    bucket = s3Conn.get_bucket('testbucket')
    key = Key(bucket)  # key object for the file to fetch
    key.key = 'test.py'
    key.get_contents_to_filename('test.py')
The CLI command is:
aws s3api get-object --bucket testbucket --key test.py my.py
Is it possible to use the latest Python SDK from Amazon (Boto 3)? If so, set up your credentials as outlined here: Boto 3 Quickstart.
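For reference, a minimal boto3 sketch that relies on the same default credential chain as the CLI (the bucket name testbucket is taken from the question):

import boto3

# boto3 resolves credentials the same way the CLI does:
# environment variables, then ~/.aws/credentials, then an instance profile.
s3 = boto3.client("s3")

response = s3.list_objects_v2(Bucket="testbucket")
for obj in response.get("Contents", []):
    print(obj["Key"])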
Also, you might check your environment variables. If they don't exist, that is okay. If they don't match those on your account, that could be the problem, as some AWS SDKs and other tools will use environment variables over the config files.
*nix:
echo $AWS_ACCESS_KEY_ID && echo $AWS_SECRET_ACCESS_KEY
Windows:
echo %AWS_ACCESS_KEY_ID% & echo %AWS_SECRET_ACCESS_KEY%
(sorry if my windows-foo is weak)
When you use the CLI it takes credentials from the .aws/credentials file by default, but for boto you will have to specify the access key and secret key in your Python script.
import boto
import boto.s3.connection
access_key = 'put your access key here!'
secret_key = 'put your secret key here!'
conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='bucketname.s3.amazonaws.com',
    # is_secure=False,  # uncomment if you are not using ssl
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)