I have an issue with a Lambda function that tries to use ffmpeg as a third-party dependency on AWS. The function uses the ffmpeg.js library, which generates ffmpeg commands in its functions when they are called. I installed ffmpeg on my instance via SSH, but it's still giving me the same error:
Command failed: ffmpeg -i "....
ffmpeg: command not found
Any advice on this? Many thanks
You need to include a static build of ffmpeg inside your project directory.
Download the x86_64 version, as that is the one used by the Lambda environment.
Unzip the archive, take the file named ffmpeg (the binary build) and put it in your project directory.
After that, paste the following snippet at the top of your code:
process.env.PATH = process.env.PATH + ':/tmp/'
process.env['FFMPEG_PATH'] = '/tmp/ffmpeg';
const BIN_PATH = process.env['LAMBDA_TASK_ROOT']
process.env['PATH'] = process.env['PATH'] + ':' + BIN_PATH;
Now, inside your exports.handler, add the following at the beginning of the function. It will look like this:
exports.handler = function(event, context, callback) {
    // Copy the bundled ffmpeg binary to /tmp and make it executable
    require('child_process').exec(
        'cp /var/task/ffmpeg /tmp/.; chmod 755 /tmp/ffmpeg;',
        function (error, stdout, stderr) {
            if (error) {
                console.log('Error occurred', error);
            } else {
                var ffmpeg = require('ffmpeg');
                // Your task to be performed
            }
        }
    );
};
I hope this helps. Don't forget to leave a thumbs up :)
The above solution is for Node.js.
I can successfully work with ffmpeg on AWS Lambda in Python:
Get a static build of ffmpeg from here.
Untar it with tar -xvf ffmpeg-release-amd64-static.tar.xz
Take the ffmpeg file (and optionally ffprobe) from the extracted folder and delete the rest of the files.
Put the bare ffmpeg file (without the subfolder) in the same folder as your Lambda code.
cd into this folder and zip with zip -r -X "../archive.zip" *
Upload zipped file to AWS Lambda and save.
In your Python code you need to set the correct filepath to the ffmpeg static build like so:
FFMPEG_STATIC = "/var/task/ffmpeg"
# now call ffmpeg with subprocess
import subprocess
subprocess.call([FFMPEG_STATIC, '-i', input_file, output_file])
I didn't have to change any file permissions. That wouldn't have worked anyway, because /var/task/ doesn't seem to be writable.
input_file and output_file are local files in your spawned Lambda instance. I download my files from S3 to /tmp/ and do the processing with ffmpeg there. Also make sure to set sufficient memory and timeout for the Lambda (I use maximum settings for my workflow).
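For illustration, a minimal handler along those lines might look like this. This is only a sketch: the bucket names and keys are placeholders, and it assumes boto3 is available in the Lambda runtime (it is bundled by default):

import subprocess
import boto3

FFMPEG_STATIC = "/var/task/ffmpeg"
s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Placeholder bucket/key names, for illustration only
    input_file = "/tmp/input.mp4"
    output_file = "/tmp/output.mp3"
    s3.download_file("my-input-bucket", "videos/input.mp4", input_file)

    # Run the static ffmpeg binary shipped inside the deployment package
    subprocess.call([FFMPEG_STATIC, "-i", input_file, output_file])

    s3.upload_file(output_file, "my-output-bucket", "audio/output.mp3")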
Related
I am attempting to make two AWS Lambda functions (written in TypeScript). Both of these functions share the same code for interacting with an API. In order not to have to copy the same code to two different Lambdas, I would like to move my shared code to a local module and have both my Lambdas depend on said module.
My initial attempt at sharing code between the two Lambdas was to use a monorepo and Lerna. My current project structure looks like this:
- lerna.json
- package.json
- packages
  - api
    - package.json
  - lambdas
    - funcA
      - package.json
    - funcB
      - package.json
lerna.json:
{
  "packages": [
    "packages/api",
    "packages/lambdas/*"
  ],
  "version": "1.0.0"
}
In each of my package.json for my Lambda functions, I am able to include my local api module as such:
"dependencies": {
"#local/api": "*"
}
With this, I've been able to move the common code to its own module. However, I'm now not sure how to bundle my functions to deploy to AWS Lambda. Is there a way for lerna to be able to create a bundle that can be deployed?
As cp -rL doesn't work on the Mac, I had to come up with something similar.
Here is a workflow that works if all of your packages belong to one scope (@org):
In the package.json of your lerna repo:
"scripts": {
"deploy": "lerna exec \"rm -rf node_modules\" && lerna bootstrap -- --production && lerna run deploy && lerna bootstrap"
}
In the package that contains your lambda function:
"scripts":{
"deploy": "npm pack && tar zxvf packagename-version.tgz && rm -rf node_modules/#org && cp -r node_modules/* package/node_modules && cd package && npm dedupe"
}
Now replace "packagename-version" and "#org" with the respective values of your project. Also add all of the dependent packages to "bundledDependencies".
After running npm run deploy in the root of your Lerna monorepo, you end up with a folder "package" inside the package that contains your lambda function. It has all the dependencies needed to run your function. Take it from there.
I had hoped that using npm pack would allow me to utilize .npmignore files, but it seems that doesn't work. If anyone has an idea how to make it work, let me know.
I have struggled with this same problem for a while now, and I was finally forced to do something about it.
I was using a little package named slice-node-modules, as found here in this similar question, which was good enough for my purposes for a while. As I have consolidated more of my projects into monorepos and begun using shared dependencies which reside as siblings rather than being externally published, I ran into shortcomings with that approach.
I've created a new tool called lerna-to-lambda which was specifically tailored to my use case. I published it publicly with minimal documentation, hopefully enough to help others in similar situations. The gist of it is that you run l2l in your bundling step, after you've installed all of your dependencies, and it copies what is needed into an output directory which is then ready to deploy to Lambda using SAM or whatever.
For example, from the README, something like this might be in your Lambda function's package.json:
"scripts": {
...
"clean": "rimraf build lambda",
"compile": "tsc -p tsconfig.build.json",
"package": "l2l -i build -o lambda",
"build": "yarn run clean && yarn run compile && yarn run package"
},
In this case, the compile step is compiling TypeScript files from a source directory into JavaScript files in the build directory. Then the package step bundles up all the code from build along with all of the Lambda's dependencies (except aws-sdk) into the directory lambda, which is what you'd deploy to AWS. If someone were using plain JavaScript rather than TypeScript, they could just copy the necessary .js files into the build directory before packaging.
I realize this question is over 2 years old, and you've probably figured out your own solutions and/or workarounds since then. But since it is still relevant to me, I assume it's still relevant to someone out there, so I am sharing.
Running lerna bootstrap will create a node_modules folder in each "package". This will include all of your lerna managed dependencies as well as external dependencies for that particular package.
From then on, your deployment of each lambda will be agnostic of the fact that you're using lerna. The deployment package will need to include the code for that specific lambda and the node_modules folder for that lambda - you can zip these and upload them manually, or use something like SAM or CloudFormation.
Edit: as you rightly point out you'll end up with symlinks in your node_modules folder which make things awkward to package up. To get around this, you could run something like this prior to packaging for deployment:
cp -rL lambdas/funcA/node_modules lambdas/funcA/packaged/node_modules
The -L will force the symlinked directories to be copied into the folder, which you can then zip.
I have used a custom script to copy the dependencies during the install process. This allows me to develop and deploy the application with the same code.
Project structure
In the package.json file of lambda_a, I have the following entry:
"scripts": {
"install": "node ./install_libs.js #libs/library_a"
},
#libs/library_a can be used by the lambda code using the following statement:
const library_a = require('#libs/library_a')
For SAM builds, I use the following command from the lambdas folder:
export SAM_BUILD=true && sam build
install_libs.js
console.log("Starting module installation")
var fs = require('fs');
var path = require('path');
var {exec} = require("child_process");
if (!fs.existsSync("node_modules")) {
fs.mkdirSync("node_modules");
}
if (!fs.existsSync("node_modules/#libs")) {
fs.mkdirSync("node_modules/#libs");
}
const sam_build = process.env.SAM_BUILD || false
libs_path = "../../"
if (sam_build) {
libs_path = "../../" + libs_path
}
process.argv.forEach(async function (val, index, array) {
if (index > 1) {
var currentLib = libs_path + val
console.log(`Building lib ${currentLib}`)
await exec(`cd ${currentLib} && npm install` , function (error, stdout, stderr){
if (error) {
console.log(`error: ${error.message}`);
return;
}
console.log(`stdout: ${stdout}`);
console.log('Importing module : ' + currentLib);
copyFolderRecursiveSync(currentLib, "node_modules/#libs")
});
}
});
function copyFolderRecursiveSync(source, target) {
    var files = [];
    // Check if folder needs to be created or integrated
    var targetFolder = path.join(target, path.basename(source));
    if (!fs.existsSync(targetFolder)) {
        fs.mkdirSync(targetFolder);
    }
    // Copy
    if (fs.lstatSync(source).isDirectory()) {
        files = fs.readdirSync(source);
        files.forEach(function (file) {
            var curSource = path.join(source, file);
            if (fs.lstatSync(curSource).isDirectory()) {
                copyFolderRecursiveSync(curSource, targetFolder);
            } else {
                copyFileSync(curSource, targetFolder);
            }
        });
    }
}

function copyFileSync(source, target) {
    var targetFile = target;
    // If target is a directory, a new file with the same name will be created
    if (fs.existsSync(target)) {
        if (fs.lstatSync(target).isDirectory()) {
            targetFile = path.join(target, path.basename(source));
        }
    }
    fs.writeFileSync(targetFile, fs.readFileSync(source));
}
I am trying to build a Freeplane derivation based on Freemind, see: https://github.com/razvan-panda/nixpkgs/blob/freeplane/pkgs/applications/misc/freeplane/default.nix
{ stdenv, fetchurl, jdk, jre, gradle }:

stdenv.mkDerivation rec {
  name = "freeplane-${version}";
  version = "1.6.13";

  src = fetchurl {
    url = "mirror://sourceforge/project/freeplane/freeplane%20stable/freeplane_src-${version}.tar.gz";
    sha256 = "0aabn6lqh2fdgdnfjg3j1rjq0bn4d1947l6ar2fycpj3jy9g3ccp";
  };

  buildInputs = [ jdk gradle ];

  buildPhase = "gradle dist";

  installPhase = ''
    mkdir -p $out/{bin,nix-support}
    cp -r ../bin/dist $out/nix-support
    sed -i 's/which/type -p/' $out/nix-support/dist/freeplane.sh

    cat >$out/bin/freeplane <<EOF
    #! /bin/sh
    JAVA_HOME=${jre} $out/nix-support/dist/freeplane.sh
    EOF
    chmod +x $out/{bin/freeplane,nix-support/dist/freeplane.sh}
  '';

  meta = with stdenv.lib; {
    description = "Mind-mapping software";
    homepage = https://www.freeplane.org/wiki/index.php/Home;
    license = licenses.gpl2Plus;
    platforms = platforms.linux;
  };
}
During the gradle build step it is throwing the following error:
building path(s) ‘/nix/store/9dc1x2aya5p8xj4lq9jl0xjnf08n7g6l-freeplane-1.6.13’
unpacking sources
unpacking source archive /nix/store/c0j5hgpfs0agh3xdnpx4qjy82aqkiidv-freeplane_src-1.6.13.tar.gz
source root is freeplane-1.6.13
setting SOURCE_DATE_EPOCH to timestamp 1517769626 of file freeplane-1.6.13/gitinfo.txt
patching sources
configuring
no configure script, doing nothing
building

FAILURE: Build failed with an exception.

What went wrong: Failed to load native library 'libnative-platform.so' for Linux amd64.

Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

builder for ‘/nix/store/id4vfk3r6fd4zpyb15dq9xfghf342qaa-freeplane-1.6.13.drv’ failed with exit code 1
error: build of ‘/nix/store/id4vfk3r6fd4zpyb15dq9xfghf342qaa-freeplane-1.6.13.drv’ failed
Running gradle dist from terminal works fine. I'm guessing that maybe one of the globally installed Nix packages provides a fix to the issue and they are not visible during the build.
I searched a lot but couldn't find any working solution. For example, removing the ~/.gradle folders didn't help.
Update
To reproduce the issue just git clone https://github.com/razvan-panda/nixpkgs, checkout the freeplane branch and run nix-build -A freeplane in the root of the repository.
Link to GitHub issue
Maybe you just don't have permission for the folder/file:
sudo chmod 777 yourFolderPath
You can also run: sudo chmod 777 yourFolderPath/* (everything inside the folder)
The folder will then no longer be locked and you can use it normally.
(At least that's how I succeeded.)
For example:
sudo chmod 777 Ruby/
Now it works.
To fix the error "What went wrong: Failed to load native library 'libnative-platform.so' for Linux amd64.", do the following:
Check whether your Gradle cache folder (~/.gradle/native) exists at all.
Check whether the file in question, i.e. libnative-platform.so, exists in that directory.
Check whether the folder ~/.gradle or ~/.gradle/native or the file ~/.gradle/native/libnative-platform.so has valid permissions (it should not be read-only; running chmod -R 755 ~/.gradle is enough).
If you don't see the native folder at all, or if your native folder seems corrupted, run your Gradle task (e.g. gradle clean build) with the -g or --gradle-user-home option and pass it a new value.
For example, if you run mkdir /tmp/newG_H_Folder; gradle clean build -g /tmp/newG_H_Folder, you'll see that Gradle populates all the required folders/files (which it needs even before running any task or option) in this new Gradle home folder (i.e. the /tmp/newG_H_Folder/.gradle directory).
From this folder you can copy just the native folder into your user's ~/.gradle folder (back up the existing native folder in ~/.gradle first if you want to), or copy the whole .gradle folder into your home directory.
Then rerun your Gradle task and it won't error out anymore.
The Gradle docs say:
https://docs.gradle.org/current/userguide/command_line_interface.html
-g, --gradle-user-home
Specifies the Gradle user home directory. The default is the .gradle directory in the user’s home directory.
I have a problem copying a file from Google Cloud Storage to my local machine. Below is the command that I run:
gsutil -m cp "gs://database/Cloud_SQL_Export_2017-09-19 (11:41:09)" .
The error that I get is as below:
Copying gs://database/Cloud_SQL_Export_2017-09-19 (11:41:09)...
==> NOTE: You are downloading one or more large file(s), which would
run significantly faster if you enabled sliced object downloads. This
feature is enabled by default but requires that compiled crcmod be
installed (see "gsutil help crcmod").
[Errno 22] invalid mode ('ab') or filename: u'.\Cloud_SQL_Export_2017-09-19 (11:41:09)_.gstmp'
CommandException: 1 file/object could not be transferred.
Hope that someone can help me :)
Seems like it requires a third-party download:
https://pypi.python.org/pypi/crcmod
But it might also need some escapes around the parentheses and other symbols:
"gs://database/Cloud_SQL_Export_2017-09-19\ \(11\:41\:09\)" .
I am uploading the APK using Python code. When I check the status after create_upload and uploading the actual file, I keep getting FAILED with android_app_aapt_debug_badging_failed. Any idea why?
Sorry to hear that you are running into issues with the upload.
For the error code you are facing, I am pasting the debugging steps below.
Debugging Steps
During the upload validation process, AWS Device Farm parses out information from the output of an "aapt debug badging " command.
Make sure that you can run this command on your Android application successfully. In the following example, the package's name is app-debug.apk.
Copy your application package to your working directory, and then run the command:
$ aapt debug badging app-debug.apk
A valid Android application package should produce output like the following:
package: name='com.amazon.aws.adf.android.referenceapp' versionCode='1' versionName='1.0' platformBuildVersionName='5.1.1-1819727'
sdkVersion:'9'
application-label:'ReferenceApp'
application: label='ReferenceApp' icon='res/mipmap-mdpi-v4/ic_launcher.png'
application-debuggable
launchable-activity: name='com.amazon.aws.adf.android.referenceapp.Activities.MainActivity' label='ReferenceApp' icon=''
uses-feature: name='android.hardware.bluetooth'
uses-implied-feature: name='android.hardware.bluetooth' reason='requested android.permission.BLUETOOTH permission, and targetSdkVersion > 4'
main
supports-screens: 'small' 'normal' 'large' 'xlarge'
supports-any-density: 'true'
locales: '--_--'
densities: '160' '213' '240' '320' '480' '640'
I was having this exact issue, and none of the suggestions were doing any good.
The fix was to assign the file to data instead of files.
import requests

def upload_app(path):
    url, arn = create_signed_upload('ANDROID_APP')  # helper that registers the upload with Device Farm
    headers = {'content-type': 'application/octet-stream'}
    with open(path, 'rb') as app:
        # Pass the file handle via data= (not files=) so the body is the raw APK bytes
        requests.put(url, data=app, headers=headers)
    success = wait_on_upload(arn)  # helper that polls the upload status
    return success
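For context, create_signed_upload and wait_on_upload are the author's own helpers and aren't shown above. A rough sketch of how they might be built on boto3's Device Farm client could look like this (the region and project ARN are placeholders, and the polling interval is arbitrary):

import time
import boto3

devicefarm = boto3.client('devicefarm', region_name='us-west-2')
PROJECT_ARN = 'arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE'  # placeholder

def create_signed_upload(upload_type):
    # Register the upload and get back a pre-signed PUT URL plus the upload ARN
    upload = devicefarm.create_upload(
        projectArn=PROJECT_ARN,
        name='app-debug.apk',
        type=upload_type,
    )['upload']
    return upload['url'], upload['arn']

def wait_on_upload(arn):
    # Poll until Device Farm finishes validating the uploaded package
    while True:
        status = devicefarm.get_upload(arn=arn)['upload']['status']
        if status in ('SUCCEEDED', 'FAILED'):
            return status == 'SUCCEEDED'
        time.sleep(5)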
Hopefully this is simple. My Python environment runs fine if I open PowerShell v3 manually: I can check the version, run external scripts, etc. But as soon as I open powershell.exe through subprocess.Popen from a Python script in another application, Python simply won't run: "The term 'python' is not recognised as the name of a cmdlet, function, script file or operable program... etc."
I've checked my environment paths repeatedly and python is running fine on the system in general.
Does anyone have any idea what could be causing this?
subprocess.Popen(["powershell.exe", '-ExecutionPolicy', 'RemoteSigned', "path to PS1_script_with python command"])
My PS1 file looks like this:
cd C:\Users\David\Geeknote\geeknote-master\geeknote
python gnsync.py --path "C:\Users\David\Desktop\C4DtoEvernote", --mask "*.nfo", --notebook "Python Logs"
function Pause{Read-Host 'You have successfully synced your C4D Annotations to Evernote using gnsync.
Please press Enter to continue...' | Out-Null}
Pause{}
It seems (for whatever reason) your $PATH is not being read or honored by the process, and thus python cannot be found.
You can either:
1. Set up the path with $env:Path += ";C:\Python27;C:\Python27\Scripts"
2. Set up the path using a custom console profile (i.e., a .ps1 file) and pass it with -PSConsoleFile.
3. The simplest option: pass the full path to the Python executable in your command file: C:\Python27\python.exe gnsync.py ...
I would try #3, and then see if you need the other options.
Adjust the paths as appropriate - especially if you have multiple Python interpreters installed.
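If you'd rather handle it from the calling side instead of editing the .ps1 file, the Python script that spawns PowerShell can also pass an augmented PATH explicitly. A minimal sketch, assuming a default Python 2.7 install location and a placeholder script path:

import os
import subprocess

env = os.environ.copy()
# Assumed install locations; adjust to wherever your interpreter actually lives
env['PATH'] += os.pathsep + r'C:\Python27' + os.pathsep + r'C:\Python27\Scripts'

subprocess.Popen(
    ['powershell.exe', '-ExecutionPolicy', 'RemoteSigned',
     r'C:\path\to\your_script.ps1'],  # placeholder path to the PS1 script
    env=env,
)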