Permission issue with AWS lambda layer - amazon-web-services

I am trying to add a javaagent to an AWS Lambda function. I created a layer and uploaded a zip file containing the jar. When I test the function I get:
Error opening zip file or JAR manifest missing : /opt/xxxxx.jar
But I can clearly see the file inside /opt:
-rwxrwxrwx 1 root root 1219876 Nov 30 05:30 xxxxx.jar
The only issue I see is that the file is owned by root. How do I upload it so that it has the current user's permissions?

It works when I select the runtime as Java 8 (Corretto) or Java 11 (Corretto), so the issue seems specific to the plain Java 8 runtime.

Related

CodeDeploy pipeline not finding AppSpec.yml - but is clearly available

I had this running months ago, so I know it works, but I have created a new EC2 instance to deploy my code and am stuck at the first hurdle.
My Deployment Details runs as follows:
Application Stop - succeeded
Download Bundle - succeeded
BeforeInstall - Failed
Upon looking at the failed event, I get:
The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path "appspec.yml". The revision was unpacked to directory "C:\ProgramData/Amazon/CodeDeploy/57f7ec1b-0452-444e-840c-4deb4566e82d/d-WH9HTZAW0/deployment-archive", and the AppSpec file was expected but not found at path "C:\ProgramData/Amazon/CodeDeploy/57f7ec1b-0452-444e-840c-4deb4566e82d/d-WH9HTZAW0/deployment-archive/appspec.yml". Consult the AWS CodeDeploy Appspec documentation for more information at http://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html
Thing is, if I jump onto my EC2 and copy and paste the full path, sure enough I see the YML file, along with the files that were in a ZIP file within my S3 bucket, so they've been successfully sent to the EC2 and unzipped.
So I'm sure it's not a permissions thing; the connection is clearly being made, and the S3 bucket, CodeDeploy and my EC2 are all happy.
I read various posts on Stack Overflow about renaming the AppSpec.yml file to "appspec.yml", "AppSpec.yaml", or "appspec.yaml", and still nothing works.
Anything obvious to try out?
OK, after a few days back and forth, the solution was incredibly annoying (and embarrassing)...
On my EC2 instance, "File name extensions" was unticked in Explorer, so my AppSpec.yml was actually AppSpec.yml.txt.
If anyone else has a similar issue, do check this first!
How are you zipping the file? A lot of the time, users end up "double-zipping". To check: if you unzip the .zip file, does it give you the files directly, or a folder?
When you zip a folder on Windows, it creates a folder inside the zip, and the CodeDeploy agent cannot read it. So to zip the artifact, select all the files, then right-click and zip them in the same location. This avoids creating a new folder inside the zip.
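On Linux/macOS the same pitfall exists; zipping the folder's contents rather than the folder itself keeps appspec.yml at the archive root. A sketch (folder and file names are illustrative):

```shell
# Demo setup (names illustrative): a revision folder with appspec.yml at its top level
mkdir -p my-app && touch my-app/appspec.yml my-app/index.html
# Zip the *contents* of the folder, not the folder itself:
(cd my-app && zip -qr ../revision.zip .)
unzip -l revision.zip   # entries show appspec.yml at the root, not my-app/appspec.yml
```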

aws code-deploy error while redeploying site

I am getting the following error while re-deploying the site using CodeDeploy and CodePipeline:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems.
The first deployment works without any issue. However, if I make a small change to the index.html file and click Release change, my source stage succeeds but the deploy stage fails, and if I put the original file back in S3 I get the above error. Versioning is enabled on S3, and the CodeDeploy agent is running on the Windows machine.
Finally got the exact error message from the code deploy log.
Can anyone help as to why I don't see this error with the original deployment, and what changes when I only make a small change in index.html (changing nothing else) and save it with the same name?
2019-03-29T16:01:55 ERROR [codedeploy-agent(3728)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Error during perform: RuntimeError - The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path "appspec.yml". The revision was unpacked to directory "C:\ProgramData/Amazon/CodeDeploy/7f6993e8-a33a-41c4-a7c5-861f5c8b61d9/d-SI7UK8P1Z/deployment-archive", and the AppSpec file was expected but not found at path "C:\ProgramData/Amazon/CodeDeploy/7f6993e8-a33a-41c4-a7c5-861f5c8b61d9/d-SI7UK8P1Z/deployment-archive/appspec.yml". Consult the AWS CodeDeploy Appspec documentation for more information at http://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html - C:/Windows/TEMP/ocr5060.tmp/src/opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:223:in `parse_app_spec'
my appspec.yml file is in the root directory, nothing is changed from original file
This issue is resolved. When I was zipping the folder, I had another folder inside it containing appspec.yml (e.g. test/appspec.yml), which was causing the issue.
Placing appspec.yml in the root directory fixed it.

Lambda function throws class not found exception when deployed with Jenkins generated zip file

I'm working on an AWS Lambda function. I deploy it by uploading a zip file of the source code (a project written in Java 8).
The project is built using Gradle; upon a successful build, it generates the deployment zip.
This works perfectly fine when I deploy the locally generated zip to the Lambda function.
Working scenario:
Zip generated through gradle build locally in workspace -> copied to AWS S3
location -> specify the s3 zip path in Lambda upload/specify URL path field.
But when I generate the Gradle build from Jenkins, the resulting zip does not work in the Lambda function; it throws "class not found exception".
Exception scenario:
Zip generated through gradle in Jenkins -> copied to AWS S3 location ->
specify the s3 zip path in Lambda upload/specify URL path field.
Class not found: com.sample.HelloWorld: java.lang.ClassNotFoundException
java.lang.ClassNotFoundException: com.sample.HelloWorld
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
I suspected this could be an issue with the file permissions of the content inside the zip file. I verified this by comparing both zips' contents in a Linux environment. I could see that files in the zip generated from Jenkins lacked some permissions, so I set permissions for the zip contents in my Gradle build:
task zip(type: Zip) {
    archiveName 'lambda-project.zip'
    fileMode 0777
    from sourceSets.main.output.files.each { zipTree(it) }
    from(configurations.runtime) {
        into 'lib'
    }
}
But I'm still getting the same error. I can see the file contents now have full permissions, yet the error persists.
Note:
Tried making the deployment package a jar and tested; still the same error.
I have configured the Lambda handler correctly. For example: the class name is "HelloWorld.java" and the package name is com.sample, so my Lambda handler configuration is com.sample.HelloWorld. I'm confident about this because the same configuration works fine when the zip is generated locally.
I have compared the zip contents (locally generated and Jenkins-generated) and could not see any difference between them.
The directories inside the zip file were lacking permissions. I had set file permissions earlier, but it worked only after also setting permissions for directories in the Gradle build:
dirMode 0777
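Combining the two settings, a sketch adapted from the task above (same archive name) would look like:

```groovy
// Sketch: the original zip task with both file and directory modes set,
// so entries *and* directories in the archive are readable by the runtime
task zip(type: Zip) {
    archiveName 'lambda-project.zip'
    fileMode 0777
    dirMode 0777
    from sourceSets.main.output.files.each { zipTree(it) }
    from(configurations.runtime) {
        into 'lib'
    }
}
```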
I would recommend using the Serverless Framework for Lambda deployment; it helps you deploy Lambda functions without much hassle. But if you want to set up CI/CD, monitoring and logging, then you can refer to the book below.

GCloud Error: Source code size exceeds the limit

I'm following the basic fulfillment and conversation setup of the api.ai tutorial to make a chat bot, and when I try to deploy the function with the command:
gcloud beta functions deploy --stage-bucket venky-bb7c4.appspot.com --trigger-http
(where 'venky-bb7c4.appspot.com' is the bucket name)
it returns the following error message:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Source code size exceeds the limit
I've searched but not found any answer; I don't know where the error is.
This is the JS file that appears in the tutorial:
/**
 * HTTP Cloud Function.
 * @param {Object} req Cloud Function request context.
 * @param {Object} res Cloud Function response context.
 */
exports.helloHttp = function helloHttp (req, res) {
  var response = "This is a sample response from your webhook!"; // Default response from the webhook to show it's working
  res.setHeader('Content-Type', 'application/json'); // Requires application/json MIME type
  res.send(JSON.stringify({
    "speech": response,     // "speech" is the spoken version of the response
    "displayText": response // "displayText" is the visual version
  }));
};
Neither of these worked for me. The way I was able to fix this was to make sure I was running the deploy from my project directory (the directory containing index.js)
The command creates a zip with the whole content of your current directory (except the node_modules subdirectory), not just the JS file, because your function may use other resources.
The error you see is because the total size of the (uncompressed) files in the directory is bigger than 512 MB.
The easiest way to solve this is by moving the .js file to its own directory and deploying from there (you can use --local-path to point to directory containing the source file if you want your working directory to be different from directory with function source).
I tried the --source option and deploying from the index.js folder, and still hit a different problem.
This error usually happens when the code being uploaded is large. In my tests I found that more than 100 MB leads to the mentioned error.
However,
To resolve this there are two solutions:
Update .gcloudignore to ignore the folders which aren't required for your function.
If option 1 doesn't resolve it, create a bucket in Cloud Storage and pass it with the --stage-bucket option.
Create a new bucket for deployment (one time)
gsutil mb gs://my-cloud-functions-deployment-bucket
The bucket name you create needs to be globally unique, otherwise the command fails because the name is already taken.
Deploy
gcloud functions deploy subscribers-firestoreDatabaseChange \
  --trigger-topic firestore-database-change \
  --region us-central1 \
  --runtime nodejs10 \
  --update-env-vars "REDIS_HOST=10.128.0.2" \
  --stage-bucket my-cloud-functions-deployment-bucket
I had similar problems while deploying Cloud Functions. What worked for me was specifying the source folder of the JS files:
gcloud functions deploy functionName --trigger-http --source path_to_project_root_folder
Also be sure to list all unnecessary folders in .gcloudignore.
Ensure the package folder has a .gitignore file (excluding node_modules).
The most recent version of gcloud requires it in order not to upload node_modules. My code size went from 119 MB to 17 KB.
Once I added the .gitignore file, the log printed:
created .gcloudignore file. See `gcloud topic gcloudignore` for details.
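For reference, a minimal .gcloudignore along these lines (entries illustrative; gcloud also falls back to .gitignore as the answer above notes) keeps node_modules and local clutter out of the upload:

```
.gcloudignore
.git
.gitignore
node_modules
```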

AWS codedeploy deployment throwing "[stderr] Could not open input file" while trying to invoke a php file from the sh file at afterInstall step

I have the following defined in the appspec file -
hooks:
  AfterInstall:
    - location: afterInstall.sh
Following is the content of afterInstall.sh (I am trying to invoke a php file from the sh file) -
php afterInstall.php
Both the files afterInstall.sh and afterInstall.php are at the same level (outermost level) in the zip archive that I am uploading to S3 -
appspec.yml
afterInstall.sh
afterInstall.php
I am getting the following error -
Error Code ScriptFailed
Script Name afterInstall.sh
Message Script at specified location: afterInstall.sh failed with exit code 1
Log Tail LifecycleEvent - AfterInstall
Script - afterInstall.sh
[stderr]Could not open input file: afterInstall.php
I also tried adding the following to the permissions section of the appspec file:
permissions:
  - object: .
    pattern: "**"
    owner: sandeepan
    group: sandeepan
    mode: 777
    type:
      - file
Note - I log in to the deployment instances as the sandeepan user.
I am a bit confused about what exactly the permissions section does. From http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref-permissions.html:
The permissions section specifies how special permissions, if any,
should be applied to the files and directories/folders in the files
section after they are copied to the instance.
I also tried by specifying owner/group as root/root and runas: root against the afterInstall hook, but still getting the same error.
Update
I also tried specifying the afterInstall.php file in the files section, and ensuring its permissions and ownership are correct:
files:
  - source: afterInstall.php
    destination: /var/cake_1.2.0.6311-beta
At /var/cake_1.2.0.6311-beta -
-rwxrwxr-x 1 sandeepan sandeepan 26 Aug 1 08:55 afterInstall.php
I have no other clue what else should be done to fix this.
Note - I am able to deploy successfully if I do not call a php file from the afterInstall.sh
The root cause of the error is that the php file reference is incorrect. Your script assumes that the current working directory is the destination folder, or the deployment archive folder.
This is a reasonable assumption, however neither of these is correct. On my Ubuntu server, the current working directory of the CodeDeploy shell invocation is actually /opt/codedeploy-agent. This explains why you get the "Could not open input file" error.
Since you are in the afterInstall lifecycle hook, all your files already exist in the final destination. To solve the problem, use the path specified in the destination: directive in your afterInstall.sh:
#!/bin/bash
php /var/cake_1.2.0.6311-beta/afterInstall.php
This will allow php to locate the correct file, and your deployment will run successfully.
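As an alternative sketch (not from the original answer): since CodeDeploy runs hook scripts from within the unpacked revision, the hook can resolve its own directory at runtime instead of hard-coding the destination path. The php call is shown commented out as the hypothetical real hook step:

```shell
#!/bin/bash
# Sketch: resolve this script's own directory so sibling files can be
# referenced without hard-coding the destination path
script_dir="$(cd "$(dirname "$0")" && pwd)"
echo "resolved dir: $script_dir"
# php "$script_dir/afterInstall.php"   # the real hook call would go here
```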
Update:
If you want to run a file in the BeforeInstall hook, the file must already exist on the system, and be referenced by a fixed path such as /tools.
This can be accomplished in one of the following ways:
Use a user-data script to download the script at instance launch time, or
'Bake' the script into the AMI image itself, and launch from that image.
In either case, the BeforeInstall hook can then call the script from its fixed path, e.g. php /tools/beforeInstall.php.
I prefer option 1 in these cases. We maintain an S3 bucket with these types of assets, which are kept on S3 and downloaded to each instance at launch time. Any updates are pushed to S3 and picked up at each new instance launch.
The object entry can be a directory or a file; the current setting "." matches the CodeDeploy deployment archive directory, not the destination the scripts were copied to. So you could try using the file destination directory as the object.
Also, since the CodeDeploy deployment archive directory contains all the files from the customer's bundle: I'm not quite sure about your file layout, but if all the objects inside the deployment archive directory are directories, the type field can probably be changed to directory.
The files section has the following structure; it specifies the names of files that should be copied to the instance during the deployment's Install event.
files:
  - source: source-file-location
    destination: destination-file-location
The hooks section has the following structure; it contains mappings that link deployment lifecycle event hooks to one or more scripts. If an event hook is not present, no operation is executed for that event. This section is required only if you will be running scripts as part of the deployment.
hooks:
  deployment-lifecycle-event-name:
    - location: script-location
      timeout: timeout-in-seconds
      runas: user-name
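Putting the two sections together, a minimal appspec.yml for the deployment described earlier might look like the following (the timeout and runas values are illustrative, not from the question):

```yaml
version: 0.0
os: linux
files:
  - source: afterInstall.php
    destination: /var/cake_1.2.0.6311-beta
hooks:
  AfterInstall:
    - location: afterInstall.sh
      timeout: 300
      runas: root
```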