I'm quite new to the Amplify function world. I've been struggling to deploy my Golang function, which is connected to a DynamoDB stream. I can run my Lambda successfully by manually uploading a .zip I created myself after building the binary with GOARCH=amd64 GOOS=linux go build src/index.go (I develop on a Mac), but when I use the Amplify CLI tools I am not able to deploy my function.
This is the folder structure of my function myfunction:
+ myfunction
├── amplify.state
├── custom-policies.json
├── dist
│ └── latest-build.zip
├── function-parameters.json
├── go.mod
├── go.sum
├── parameters.json
├── src
│ ├── event.json
│ └── index.go
└── tinkusercreate-cloudformation-template.json
The problem is that I can't use the amplify function build command: it appears to create a .zip file containing my source file index.go (not the compiled binary), so the Lambda, regardless of the handler I set, cannot run it. I see errors like
fork/exec /var/task/index.go: exec format error: PathError null or
fork/exec /var/task/index: no such file or directory: PathError null
depending on the handler I set.
Is there a way to make amplify function build work for a Golang Lambda? I would like to be able to run amplify function build myfunction successfully, so that I can deliver a working deployment to my target environment with amplify push.
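For reference, the manual packaging that does work for me looks roughly like this, run from the function directory with the handler set to index (I assume amplify push would pick up dist/latest-build.zip if I refreshed it this way, but I haven't confirmed that):

GOOS=linux GOARCH=amd64 go build -o index src/index.go
zip -j dist/latest-build.zip index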
I'm managing a repo that maintains common utilities. It contains my common utils and a Jenkinsfile, and all dependencies are managed by Poetry, so I have some TOML files to maintain. We keep adding new utils as we go.
.
├── Jenkinsfile
├── README.md
├── common_utils
│ ├── __init__.py
│ ├── aws.py
│ └── sftp.py
├── pre-commit.yaml
├── poetry.lock
└── pyproject.toml
When I push my code to git, my Jenkinsfile packages it and publishes it to AWS CodeArtifact. But every time I push, I first have to manually update the TOML version locally before pushing to the dev branch, then pull a new branch from dev and update the version again before pushing to the master branch; otherwise I get a version conflict in CodeArtifact.
I can't just have Jenkins bump the version: even though that solves the conflict, the version in the source code isn't modified, which means I still need to update all the versions manually. I'm not sure pre-commit will help either, because I don't want to update the version every time I push to a feature branch.
Is there a known way to handle this, or is going with a release branch the only option?
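One common pattern (a sketch, not something I've confirmed against this exact setup) is to let CI bump the version on the release branch and commit the change back, so pyproject.toml stays in sync with what gets published and feature branches are left alone. Roughly, a Jenkins stage guarded to run only on dev could do:

poetry version patch                             # bumps the version in pyproject.toml
git commit -am "Bump version to $(poetry version -s)"
git push origin HEAD:dev                         # branch name is an assumption
poetry build
poetry publish --repository my-codeartifact-repo # repository name is an assumption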
I'm working in a repository with multiple smaller applications for various lambdas. I'd like to be able to run cargo test from the top-level directory, but I can't seem to find a way to get this to work, since the files aren't nested within a top-level src directory.
├── cloudformation
├── apps
│ ├── app1
│ │ └── src
│ └── app2
│ └── src
└── otherStuff
Ideally I could run cargo test from the top level and it would dig into apps and run tests from the src directory nested within each individual app. Is there a way to accomplish this?
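From skimming the Cargo book, a workspace manifest at the repository root looks like it might be what I want; a sketch of what I mean (assuming app1 and app2 are ordinary Cargo packages with their own Cargo.toml files):

[workspace]
members = [
    "apps/app1",
    "apps/app2",
]

As far as I understand, cargo test at the top level would then run each member's tests, but I'm not sure this is the recommended setup.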
I am a beginner Yii2 contributor. When I contribute to the yiisoft/yii2 project, it is quite clear how to deploy the project and run its PHPUnit tests. But I have some questions about working with extensions.
First I add an extension with composer require. Then I git clone the same extension into my home directory. After that I replace the first directory with a symlink pointing to the second one. This is quite convenient because I can see my changes on the site, but I can't use Composer anymore.
How do I run the extension's tests? They often depend on the Yii2 app class, but:
$ vendor/bin/phpunit vendor/yiisoft/yii2-elasticsearch/tests/
PHP Fatal error: Class 'yiiunit\extensions\elasticsearch\TestCase' not found in /var/www/yii2.test/vendor/yiisoft/yii2-elasticsearch/tests/ActiveDataProviderTest.php on line 11
$ vendor/bin/phpunit vendor/yiisoft/yii2-queue/tests/
PHP Fatal error: Class 'tests\TestCase' not found in /var/www/yii2.test/vendor/yiisoft/yii2-queue/tests/JobEventTest.php on line 22
Should I specify a config file? Or should I run these tests independently of the framework?
So, would you please share the best practices for this situation?
You should run these tests outside of the framework. From the extension's perspective, yiisoft/yii2 is a dependency and should be installed in the vendor directory inside the extension's own directory. In short: go to the extension directory and run composer install. After that you should get a directory structure similar to this:
extension/
├── src/
│ └── ...
├── vendor/
│ ├── yiisoft/
│ │ ├── yii2/
│ │ └── ...
│ └── ...
├── composer.json
└── ...
Then you can run the tests directly from the extension directory (probably with the vendor/bin/phpunit command).
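For example, assuming the elasticsearch extension is cloned to ~/yii2-elasticsearch (the path is hypothetical):

cd ~/yii2-elasticsearch
composer install      # installs yiisoft/yii2 into the extension's own vendor/
vendor/bin/phpunit    # runs the extension's test suite with its own autoloader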
I am running my Spark application on EMR, and have several println() statements. Other than the console, where do these statements get logged?
My S3 aws-logs directory structure for my cluster looks like:
node
├── i-0031cd7a536a42g1e
│ ├── applications
│ ├── bootstrap-actions
│ ├── daemons
│ ├── provision-node
│ └── setup-devices
containers/
├── application_12341331455631_0001
│ ├── container_12341331455631_0001_01_000001
You can find the println output in a few places:
Resource Manager -> Your Application -> Logs -> stdout
Your S3 log directory -> containers/application_.../container_.../stdout (though this takes a few minutes to populate after the application finishes)
SSH into the EMR master node and run yarn logs -applicationId <Application ID> -log_files <log_file_type>
There is a very important thing that you need to consider when printing from Spark: are you running code that gets executed in the driver or is it code that runs in the executor?
For example, if you do the following, it will output in the console as you are bringing data back to the driver:
for i in your_rdd.collect():
    print(i)
But the following will run within an executor and thus it will be written in the Spark logs:
def run_in_executor(value):
    print(value)

# note: map is lazy, so an action (count() here) is needed to force evaluation
your_rdd.map(lambda x: run_in_executor(x)).count()
Coming back to your original question: the second case will write to the log location. Logs are usually written on the master node, under /mnt/var/log/hadoop/steps, but it might be better to ship the logs to an S3 bucket with --log-uri; that way they are easier to find.
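For example, a minimal sketch of creating a cluster that ships its logs to S3 (the bucket name, instance settings, and release label are placeholders):

aws emr create-cluster \
  --name "my-spark-cluster" \
  --release-label emr-5.30.0 \
  --applications Name=Spark \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --log-uri s3://my-log-bucket/emr-logs/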
I'm new to Go.
I am trying to deploy a simple web project to EB without success.
I would like to deploy a project with the following local structure to Amazon EB:
$GOPATH
├── bin
├── pkg
└── src
├── github.com
│ ├── AstralinkIO
│ │ └── api-server <-- project/repository root
│ │ ├── bin
│ │ ├── cmd <-- main package
│ │ ├── pkg
│ │ ├── static
│ │ └── vendor
But I'm not sure how to do that: when building, Amazon treats api-server as the $GOPATH, and of course the import paths break.
I read that most of the time it's best to keep all repos under the same workspace, but that makes deployment harder.
I'm using a Procfile and a Buildfile to customize the output path, but I can't find a solution for the dependencies.
What is the best way to deploy such project to EB?
A long time has passed since I used Beanstalk, so I'm a bit rusty on the details, but the basic idea is as follows. AWS Beanstalk support for Go is a bit odd by design: it extracts your source files into a folder on the server, declares that folder as the GOPATH, and tries to build your application assuming that your main package is at the root of the GOPATH, which is not a standard layout for Go projects. So your options are:
1) Package your whole GOPATH as the "source bundle" for Beanstalk. Then you can write a build.sh script that sets GOPATH and builds things your way, and call build.sh from your Buildfile (see the sketch after this list).
2) Change your main package to be a regular package (e.g. github.com/AstralinkIO/api-server/cmd). Then create an application.go file at the root of your GOPATH (yes, outside of src, while all the actual packages stay in src as they should). That application.go becomes your package main and contains only a main function (which calls your current Main function from github.com/AstralinkIO/api-server/cmd). That should do the trick, though your mileage may vary.
3) A somewhat easier option is to use the Docker-based Go platform instead. It still builds your Go application on the server, with mostly the same issues as above, but it's better documented, and being able to test it locally helps a lot with getting the configuration and build right. It will also give you some insight into how Beanstalk builds Go applications, which helps with options 1 and 2. I used this option myself until I moved to plain EC2 instances, and I still use the skills gained from it to build my current app releases with Docker.
4) Your best option, though (in my humble opinion), is to build your app yourself and package it as a ready-to-run binary file. See the second bullet point paragraph here.
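To illustrate option 1, here is a rough sketch of what the Buildfile and build.sh pair might look like; the import path comes from your question, everything else is an assumption on my part.

Buildfile:

make: ./build.sh

build.sh:

#!/usr/bin/env bash
set -euo pipefail
# Beanstalk extracts the source bundle into the current directory;
# since the bundle is the whole GOPATH, declare it as such explicitly.
export GOPATH="$(pwd)"
go build -o bin/application github.com/AstralinkIO/api-server/cmd

Procfile:

web: bin/application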
Well, whichever option you choose - good luck!