I am trying the async/await approach in an AWS Lambda function with Node v8.11.3 and npm v5.10.0. When I run it, I get the following response:
{
"errorMessage": "Unexpected token function",
"errorType": "SyntaxError",
"stackTrace": [
" ^^^^^^^^",
"SyntaxError: Unexpected token function",
"createScript (vm.js:56:10)",
"Object.runInThisContext (vm.js:97:10)",
"Module._compile (module.js:542:28)",
"Object.Module._extensions..js (module.js:579:10)",
"Module.load (module.js:487:32)",
"tryModuleLoad (module.js:446:12)",
"Function.Module._load (module.js:438:3)",
"Module.require (module.js:497:17)",
"require (internal/module.js:20:19)"
]
}
The lambda function is:
const fetch = require('node-fetch');

exports.handler = async function (event, context) {
    console.log(event);
    let img = await fetch(`https://catappapi.herokuapp.com/users/${event.userId}`);
    let parseddata = await img.json();
    console.log(parseddata.imageUrl);
};
How do I solve this issue?
You are getting this error because the Cloud9 environment process is running an older version of Node.js (you can verify this with console.log(process.version), which will differ from the output of node --version). Follow these steps to update the Node version used by the process in your Cloud9 environment:
nvm install 11
nvm use 11
nvm alias default v11
Related
I have this Python code inside Lambda:
# This script will run as a Lambda function on AWS.
import time, json

cmdStatus = "Failed"
message = ""
statusCode = 200

def lambda_handler(event, context):
    time.sleep(2)

    if(cmdStatus=="Failed"):
        message = "Command execution failed"
        statusCode = 400
    elif(cmdStatus=="Success"):
        message = "The script execution is successful"
        statusCode = 200
    else:
        message = "The cmd status is: " + cmdStatus
        statusCode = 500

    return {
        'statusCode': statusCode,
        'body': json.dumps(message)
    }
and I am invoking this Lambda from an Azure DevOps build pipeline using the AWS Lambda Invoke Function task.
As you can see in the code above, I have intentionally set cmdStatus to Failed so that the Lambda fails, but when it is executed from the Azure DevOps build pipeline the task succeeds. Strange.
How can I make the pipeline fail in this case? Please help.
Thanks
I have been working through a similar issue myself and it looks like a bug in the task itself. It was reported in 2019 and nothing has happened since, so I wouldn't hold out much hope.
https://github.com/aws/aws-toolkit-azure-devops/issues/175
My workaround was to use the AWS CLI task instead, with:
Command: lambda
Subcommand: invoke
Options and Parameters: --function-name {nameOfYourFunction} response.json
Followed immediately by a Bash task with this inline script:
cat response.json
if grep -q "errorType" "response.json"; then
echo "An error was found"
exit 1
fi
echo "No error was found"
I am using the fluent-ffmpeg Node.js package to run FFmpeg for audio conversion on AWS Lambda. I am using this FFmpeg layer for Lambda.
Here is my code:
const bitrate64 = ffmpeg("file.mp3").audioBitrate('64k');
bitrate64.outputOptions([
'-preset slow',
'-g 48',
"-map", "0:0",
'-hls_time 6',
'-master_pl_name master.m3u8',
'-hls_segment_filename 64k/fileSequence%d.ts'
])
.output('./64k/prog_index.m3u8')
.on('progress', function(progress) {
console.log('Processing 64k bitrate: ' + progress.percent + '% done')
})
.on('end', function(err, stdout, stderr) {
console.log('Finished processing 64k bitrate!')
})
.run()
After running it via AWS Lambda I get the following error message:
ERROR Uncaught Exception
{
"errorType": "Error",
"errorMessage": "ffmpeg exited with code 1: Conversion failed!\n",
"stack": [
"Error: ffmpeg exited with code 1: Conversion failed!",
"",
" at ChildProcess.<anonymous> (/var/task/node_modules/fluent-ffmpeg/lib/processor.js:182:22)",
" at ChildProcess.emit (events.js:198:13)",
" at ChildProcess.EventEmitter.emit (domain.js:448:20)",
" at Process.ChildProcess._handle.onexit (internal/child_process.js:248:12)"
]
}
I don't get any more info, so I am not sure what's going on. Can anyone tell me what's wrong here and how I can enable more detailed logs?
I added an 'error' callback to get a detailed error and found that there was a permissions issue on Lambda:
.on('error', function(err, stdout, stderr) {
  if (err) {
    console.log(err.message);
    console.log("stdout:\n" + stdout);
    console.log("stderr:\n" + stderr);
    reject("Error"); // assumes the conversion is wrapped in a Promise executor
  }
})
I built a native Java AWS Lambda function using Graal and Micronaut, as explained here.
After deploying it to AWS Lambda (custom runtime), I can't successfully execute it.
The error that AWS shows is:
{
"errorType": "Runtime.ExitError",
"errorMessage": "RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Error: Runtime exited with error: exit status 1"
}
The AWS log output is:
START RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Version: $LATEST
01:13:08.015 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [ec2, cloud, function]
Error executing function (Use -x for more information): Error decoding JSON stream for type [request]: No content to map due to end-of-input
at [Source: (BufferedInputStream); line: 1, column: 0]
END RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2
REPORT RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Duration: 698.31 ms Billed Duration: 700 ms Memory Size: 512 MB Max Memory Used: 54 MB
RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Error: Runtime exited with error: exit status 1
Runtime.ExitError
But when I test it locally using
echo '{"value":"testing"}' | ./server
I get:
01:35:56.675 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [function]
{"value":"New value: testing"}
The function code is:
@FunctionBean("user-data-function")
public class UserDataFunction implements Function<UserDataRequest, UserData> {

    private static final Logger LOG = LoggerFactory.getLogger(UserDataFunction.class);

    private final UserDataService userDataService;

    public UserDataFunction(UserDataService userDataService) {
        this.userDataService = userDataService;
    }

    @Override
    public UserData apply(UserDataRequest request) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Request: {}", request.getValue());
        }
        return userDataService.get(request.getValue());
    }
}
And the UserDataService is:
@Singleton
public class UserDataService {

    public UserData get(String value) {
        UserData userData = new UserData();
        userData.setValue("New value: " + value);
        return userData;
    }
}
To test it on AWS console, I configured the following test event:
{ "value": "aws lambda test" }
PS: I uploaded to AWS Lambda a zip file that contains the "server" and "bootstrap" files to enable the custom runtime, as explained before.
What am I doing wrong?
Thanks in advance.
Tiago Peixoto.
EDIT: added the Lambda test event used in the AWS console.
OK, I figured it out. I just changed the bootstrap file from this:
#!/bin/sh
set -euo pipefail
./server
to this:
#!/bin/sh
set -euo pipefail

# Processing
while true
do
  HEADERS="$(mktemp)"

  # Get an event
  EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)

  # Execute the handler function from the script
  RESPONSE=$(echo "$EVENT_DATA" | ./server)

  # Send the response
  curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$RESPONSE"
done
as explained here
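For reference, this is a rough Python rendering of the same loop, using only the standard library. It is purely illustrative (the actual bootstrap stays the shell script above); it just spells out what the two curl calls are doing against the Runtime API.
import os
import subprocess
import urllib.request

RUNTIME_API = os.environ["AWS_LAMBDA_RUNTIME_API"]
BASE = f"http://{RUNTIME_API}/2018-06-01/runtime/invocation"

while True:
    # Get the next event; the request id comes back in a response header.
    with urllib.request.urlopen(f"{BASE}/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event_data = resp.read()

    # Pipe the event into the native binary, like `echo "$EVENT_DATA" | ./server`.
    result = subprocess.run(["./server"], input=event_data, capture_output=True)

    # POST the handler output back for this request id.
    urllib.request.urlopen(
        urllib.request.Request(f"{BASE}/{request_id}/response", data=result.stdout)
    ).read()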
I am trying to read data from Elasticsearch in PySpark, using the elasticsearch-hadoop API in Spark. The ES cluster sits on AWS EMR, which requires credentials to sign in. My script is as below:
from pyspark import SparkContext, SparkConf

sc.stop()
conf = SparkConf().setAppName("ESTest")
sc = SparkContext(conf=conf)

es_read_conf = {
    "es.host": "vhost", "es.nodes": "node", "es.port": "443",
    "es.query": '{ "query": { "match_all": {} } }',
    "es.input.json": "true", "es.net.https.auth.user": "aws_access_key",
    "es.net.https.auth.pass": "aws_secret_key", "es.net.ssl": "true",
    "es.resource": "index/type", "es.nodes.wan.only": "true"
}

es_rdd = sc.newAPIHadoopRDD(
    inputFormatClass="org.elasticsearch.hadoop.mr.EsInputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf=es_read_conf
)
Pyspark keeps throwing error:
py4j.protocol.Py4JJavaError: An error occurred while calling
z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [HEAD] on
[index] failed; server [node:443] returned [403|Forbidden:]
I checked everything, and it all made sense except for the user and pass entries: would the AWS access key and secret key work here? We don't want to use the console user and password here for security reasons. Is there a different way to do the same thing?
Could anyone please help me figure out why I get the exception below? All I'm trying to do is read some data from a local file in my Spark program and write it into S3. I have the correct secret key and access key specified like this -
Do you think it's related to a version mismatch of some library?
SparkConf conf = new SparkConf();
// add more spark related properties
AWSCredentials credentials = DefaultAWSCredentialsProviderChain.getInstance().getCredentials();
conf.set("spark.hadoop.fs.s3a.access.key", credentials.getAWSAccessKeyId());
conf.set("spark.hadoop.fs.s3a.secret.key", credentials.getAWSSecretKey());
The Java code is plain vanilla -
protected void process() throws JobException {
    JavaRDD<String> linesRDD = _sparkContext.textFile(_jArgs.getFileLocation());
    linesRDD.saveAsTextFile("s3a://my.bucket/" + Math.random() + "final.txt");
}
This is my code and Gradle setup.
Gradle
ext.libs = [
aws: [
lambda: 'com.amazonaws:aws-lambda-java-core:1.2.0',
// The AWS SDK will dynamically import the X-Ray SDK to emit subsegments for downstream calls made by your
// function
//recorderCore: 'com.amazonaws:aws-xray-recorder-sdk-core:1.1.2',
//recorderCoreAwsSdk: 'com.amazonaws:aws-xray-recorder-sdk-aws-sdk:1.1.2',
//recorderCoreAwsSdkInstrumentor: 'com.amazonaws:aws-xray-recorder-sdk-aws-sdk-instrumentor:1.1.2',
// https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk
javaSDK: 'com.amazonaws:aws-java-sdk:1.11.311',
recorderSDK: 'com.amazonaws:aws-java-sdk-dynamodb:1.11.311',
// https://mvnrepository.com/artifact/com.amazonaws/aws-lambda-java-events
lambdaEvents: 'com.amazonaws:aws-lambda-java-events:2.0.2',
snsSDK: 'com.amazonaws:aws-java-sdk-sns:1.11.311',
// https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-emr
emr :'com.amazonaws:aws-java-sdk-emr:1.11.311'
],
//jodaTime: 'joda-time:joda-time:2.7',
//guava : 'com.google.guava:guava:18.0',
jCommander : 'com.beust:jcommander:1.71',
//jackson: 'com.fasterxml.jackson.module:jackson-module-scala_2.11:2.8.8',
jackson: 'com.fasterxml.jackson.core:jackson-databind:2.8.0',
apacheCommons: [
lang3: "org.apache.commons:commons-lang3:3.3.2",
],
spark: [
core: 'org.apache.spark:spark-core_2.11:2.3.0',
hadoopAws: 'org.apache.hadoop:hadoop-aws:2.8.1',
//hadoopClient:'org.apache.hadoop:hadoop-client:2.8.1',
//hadoopCommon:'org.apache.hadoop:hadoop-common:2.8.1',
jackson: 'com.fasterxml.jackson.module:jackson-module-scala_2.11:2.8.8'
],
Exception
2018-04-10 22:14:22.270 | ERROR | | | |c.f.d.p.s.SparkJobEntry-46
Exception found in job for file type : EMAIL
java.nio.file.AccessDeniedException: s3a://my.bucket/0.253592564392344final.txt: getFileStatus on
s3a://my.bucket/0.253592564392344final.txt:
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service:
Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID:
62622F7F27793DBA; S3 Extended Request ID: BHCZT6BSUP39CdFOLz0uxkJGPH1tPsChYl40a32bYglLImC6PQo+LFtBClnWLWbtArV/z1SOt68=), S3 Extended Request ID: BHCZT6BSUP39CdFOLz0uxkJGPH1tPsChYl40a32bYglLImC6PQo+LFtBClnWLWbtArV/z1SOt68=
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:158) ~[hadoop-aws-2.8.1.jar:na]
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:101) ~[hadoop-aws-2.8.1.jar:na]
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1568) ~[hadoop-aws-2.8.1.jar:na]
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:117) ~[hadoop-aws-2.8.1.jar:na]
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1436) ~[hadoop-common-2.8.1.jar:na]
at org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:2040) ~[hadoop-aws-2.8.1.jar:na]
at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:131) ~[hadoop-mapreduce-client-core-2.6.5.jar:na]
at org.apache.spark.internal.io.HadoopMapRedWriteConfigUtil.assertConf(SparkHadoopWriter.scala:283) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:71) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1096) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) ~[spark-core_2.11-2.3.0.jar:2.3.0]
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363) ~[spark-core_2.11-2.3.0.jar:2.3.0]
Once you are playing with the Hadoop Configuration classes, you need to strip out the spark.hadoop prefix, so just use fs.s3a.access.key, etc.
All the options are defined in the class org.apache.hadoop.fs.s3a.Constants: if you reference them you'll avoid typos too.
One thing to consider is that all the source for Spark and Hadoop is public: there's nothing to stop you taking that stack trace, setting some breakpoints and trying to run this in your IDE. It's what we normally do ourselves when things get bad.
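To make the prefix rule concrete, here is a small sketch in PySpark (Python just for brevity; the key names are exactly the same from Java or Scala, and the credential values are placeholders). Keys set on SparkConf keep the spark.hadoop. prefix and Spark copies them into the Hadoop Configuration with the prefix stripped; keys set on the Hadoop Configuration object itself never carry the prefix.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("S3AWrite")
# On SparkConf the Hadoop keys are namespaced with "spark.hadoop."
conf.set("spark.hadoop.fs.s3a.access.key", "ACCESS_KEY_PLACEHOLDER")
conf.set("spark.hadoop.fs.s3a.secret.key", "SECRET_KEY_PLACEHOLDER")

sc = SparkContext(conf=conf)

# On the Hadoop Configuration itself the prefix is stripped; these are the names
# defined in org.apache.hadoop.fs.s3a.Constants. (_jsc is PySpark-internal, used
# here only to reach the underlying Hadoop Configuration.)
hadoop_conf = sc._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.access.key", "ACCESS_KEY_PLACEHOLDER")
hadoop_conf.set("fs.s3a.secret.key", "SECRET_KEY_PLACEHOLDER")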