Publish artifact to AWS CodeArtifact with sbt

I'm trying to publish artifacts to AWS CodeArtifact using sbt, but I'm having some trouble.
Given an sbt project, when I run sbt publish the package is uploaded to the repository, but it remains in the Unfinished state. The AWS CodeArtifact documentation says:
Unfinished: The last attempt to publish did not complete. Currently only Maven package versions can have a status of Unfinished. This can occur when the client uploads one or more assets for a package version but does not publish a maven-metadata.xml file for the package that includes that version.
I'm using sbt 1.3.3, I'm not using any plugins, and the publishMavenStyle setting is true.
I know that the sbt-maven-resolver plugin (here is the repo) solves the issue, but it seems to be an unmaintained plugin, and moreover, when using it I lose all logs about the publishing process, so I don't trust it.
Has anyone had the same issue and solved it somehow?

Using CodeArtifact with SBT
Setting up SBT with CodeArtifact
Publishing Packages with SBT (also avoiding the artifact ending up in the Unfinished state)
1. Setting up SBT with CodeArtifact
Create a CodeArtifact repository with a Maven upstream. For this example we're going to use the repository maven-test in the domain launchops.
Open up the Connection Instructions in the console and choose mvn. We will need information from this later.
Copy the command which exports the "CODEARTIFACT_AUTH_TOKEN" environment variable from the console and run it in your environment. This sets "CODEARTIFACT_AUTH_TOKEN" as the password for our repository; the username is always aws.
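If you prefer not to copy the command from the console, a minimal sketch of the same export, assuming the launchops domain and the 123456789012 account ID used in this example:
# Fetch a CodeArtifact authorization token (valid for 12 hours by default)
export CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token \
  --domain launchops --domain-owner 123456789012 \
  --query authorizationToken --output text)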
In the build.sbt file import sbt.Credentials:
import sbt.{Credentials}
Now we need to setup the credentials. To do this we're first going to read the CODEARTIFACT_AUTH_TOKEN environment variable:
val repoPass = sys.env.get("CODEARTIFACT_AUTH_TOKEN").getOrElse("")
Next, we're going to use the previously imported sbt.Credentials to setup a new set of Credentials:
credentials += Credentials("launchops/maven-test", "launchops-123456789012.d.codeartifact.us-east-1.amazonaws.com", "aws", repoPass)
The values passed to Credentials are ("domain-name/repository-name", "repository hostname without protocol", "username", "password"). Since the username is always aws and the password comes from the repoPass variable, we only need to modify the first two values to point to our repository.
Now we just need to instruct SBT to use our repository as a resolver. The console's connection instructions will generate Maven settings, for example:
<repository>
  <id>launchops--maven-test</id>
  <url>https://launchops-123456789012.d.codeartifact.us-east-1.amazonaws.com/maven/maven-test/</url>
</repository>
We will use these values to create a resolver in our build.sbt file:
resolvers += "launchops--maven-test" at "https://launchops-123456789012.d.codeartifact.us-east-1.amazonaws.com/maven/maven-test"
The format is: resolvers += "ID from the Maven configuration in the console" at "Repository URL from the Maven configuration in the console".
To completely disable the use of public Maven repositories (Force CodeArtifact usage) you can add the following line to the build.sbt file:
externalResolvers := Resolver.combineDefaultResolvers(resolvers.value.toVector, mavenCentral = false)
After performing these setup steps you should be able to run sbt update and observe packages being downloaded through CodeArtifact.
Sample build.sbt for reference:
import sbt.{Credentials, Path}
name := "scala-test"
version := "0.3.0"
scalaVersion := "2.12.6"
organization := "com.abc.def"
val repoPass = sys.env.get("CODEARTIFACT_AUTH_TOKEN").getOrElse("")
credentials += Credentials("launchops/maven-test", "launchops-123456789012.d.codeartifact.us-east-1.amazonaws.com", "aws", repoPass)
resolvers += "launchops--maven-test" at "https://launchops-123456789012.d.codeartifact.us-east-1.amazonaws.com/maven/maven-test"
libraryDependencies ++= Seq(
  "org.scalatest" %% "scalatest" % "3.0.0" % "test",
  "io.nats" % "jnats" % "2.0.0",
  "org.json4s" %% "json4s-native" % "3.6.0"
)
2. Publishing Packages
Apart from pulling dependencies, SBT can also be used to publish packages. To have SBT publish to CodeArtifact we first need to set it up in the build.sbt file:
Add the following to the file:
publishMavenStyle := true
publishTo := Some("launchops--maven-test" at "https://launchops-123456789012.d.codeartifact.us-east-1.amazonaws.com/maven/maven-test")
At this point, technically, running sbt publish will push the package to CodeArtifact; however, it will end up in the Unfinished state. We need to make use of the sbt-maven-resolver plugin to help get the package into the correct format: https://github.com/sbt/sbt-maven-resolver
In the project/plugins.sbt file add the following line:
addSbtPlugin("org.scala-sbt" % "sbt-maven-resolver" % "0.1.0")
Now you can run sbt publish and have the package publish to CodeArtifact successfully. If you see an error make sure that you are using a recent version of SBT.

You can achieve the same result by following shariqmaws' answer but leaving out the sbt-maven-resolver plugin.
The publish will result in an artifact in the Unfinished state.
Then use the aws codeartifact CLI (or curl, as described in https://docs.aws.amazon.com/codeartifact/latest/ug/maven-curl.html) to publish it.
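For example, a minimal sketch that flips the version's status with the CLI, assuming the launchops/maven-test repository and the sample build.sbt from the answer above (for a Scala artifact the package name usually carries the Scala binary version suffix, e.g. scala-test_2.12):
# Mark the uploaded Maven package version as Published (values below come from this example setup)
aws codeartifact update-package-versions-status \
  --domain launchops --domain-owner 123456789012 \
  --repository maven-test \
  --format maven --namespace com.abc.def \
  --package scala-test_2.12 \
  --versions 0.3.0 \
  --target-status Published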

Related

How do I continue working with Amplify on a new machine?

I'm using React Native for my project. On my old machine, when I ran amplify status, I had the Auth, Api and Storage services listed.
I moved to my new machine, installed node, watchman, brew, etc., navigated to my React Native project, and ran react-native run-ios, and voila, my app is running. All the calls to my AWS Api, Auth and Storage are working perfectly.
Now I can run some amplify commands, such as amplify status. I tried amplify env add; here's what I got:
Users-MBP-2:projectname username$ amplify env add
Note: It is recommended to run this command from the root of your app directory
? Do you want to use an existing environment? Yes
? Choose the environment you would like to use: dev
Using default provider awscloudformation
✖ There was an error initializing your environment.
init failed
Error: ENOENT: no such file or directory, open '/Users/username/.aws/credentials'
at Object.openSync (fs.js:462:3)
at Proxy.readFileSync (fs.js:364:35)
at Object.readFileSync (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/util.js:95:26)
at IniLoader.parseFile (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/shared-ini/ini-loader.js:6:47)
at IniLoader.loadFrom (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/shared-ini/ini-loader.js:56:30)
at Config.region (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/node_loader.js:100:36)
at Config.set (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/config.js:507:39)
at Config.<anonymous> (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/config.js:342:12)
at Config.each (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/util.js:507:32)
at new Config (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/config.js:341:19) {
errno: -2,
syscall: 'open',
code: 'ENOENT',
path: '/Users/username/.aws/credentials'
}
Do you think the credentials info needs to be brought over/configured on my new machine?
When I run amplify configure project, it's like doing an amplify init and building a project from scratch. I'm being asked:
? Enter a name for the project: ProjectName
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using (Use arrow keys)
angular
ember
ionic
react
❯ react-native
vue
none
etc....
I also already have a region, username, access key, secret access key, etc.
I do not want to replace or ruin anything in my current backend or current project! What's going on?
Ensure amplify-cli is installed and you're logged in with your AWS details.
npm install -g @aws-amplify/cli
amplify configure
Running amplify configure is mainly to give the cli knowledge of your AWS account so subsequent commands can have access to things.
If you get 'amplify: command not found' errors, try restarting your terminal. If you still have no luck, check that amplify has been added to your PATH variable.
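A quick way to check this, assuming a global npm install (the global prefix varies between machines):
# amplify is installed under npm's global bin directory
npm config get prefix
# If <prefix>/bin is not on your PATH yet, add it for the current shell
export PATH="$PATH:$(npm config get prefix)/bin"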
Run amplify env add, but choose an existing environment. This will let you choose the environment you created on your other machine so you can pull those settings down to your new machine.
amplify env add
? Do you want to use an existing environment? Yes
Production
Follow up with:
amplify pull
You don't need to run amplify add auth again or anything. All of that will pull down automatically after you've done the above.
You do NOT need to do all of the configuration again, but some of it is required.
Install the Amplify CLI: npm install -g @aws-amplify/cli
Use amplify pull
https://docs.amplify.aws/cli/start#amplify-pull
Follow the rest of the steps:
-- provide the accessKeyId and secretAccessKey
-- region
-- select the Amplify project
and then the rest of the app-related settings like IDE, directory, and so on.
I tried every solution, then I found this (on a MacBook):
% sudo -i
Password:
~ root# npm install -g @aws-amplify/cli
-- Ctrl+D to exit from the root user
% amplify pull --appId xxxx --envName yyyy
Note: to get --appId xxxx --envName yyyy, log in to the AWS console, choose AWS Amplify, click your app, go to Backend environments, find the backend environment you wish to pull, and click Edit backend. At the top right, click 'Local setup instructions' (amplify pull --appId YOUR_APP_ID --envName YOUR_ENV_NAME).
Wait until it asks you to verify your Amplify access.
✔ Successfully received Amplify Studio tokens.
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using react
? Source Directory Path: src
? Distribution Directory Path: build
? Build Command: npm run-script build
? Start Command: npm run-script start
✔ Synced UI components.
? Do you plan on modifying this backend? Yes
⠴ Building resource api/xxxx✅ GraphQL schema compiled successfully.
Edit your schema at ....
✔ Successfully pulled backend environment yyyy from the cloud.
✅ Successfully pulled backend environment staging from the cloud.
Run 'amplify pull' to sync future upstream changes.
% amplify pull
% npm install
% npm start
Hope this helps everyone!
Happy coding :)

AWS Codepipeline first stage

I've created a pipeline using CodeCommit -> CodeBuild -> CodePipeline in order to automatically build and test my Android app located in my GitHub repository.
But at the first stage after the build step, the pipeline returns this error:
I don't know what the app location refers to; if it refers to the .apk file, there is no .apk file in my repository.
Can anyone help me?
You need an .apk file in the input artifact to the 'Test' stage (the AWS Device Farm action), and then you specify the .apk filename in the 'App - optional' field while setting up the AWS Device Farm test action in CodePipeline.
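On the build side, a minimal sketch assuming a standard Gradle Android project (the app module name and the debug build type are assumptions); the resulting .apk also needs to be listed in the buildspec's artifacts section so it ends up in the output artifact that the Test stage consumes:
# Build the APK; with the default Gradle layout it lands under app/build/outputs/apk/debug/
./gradlew assembleDebug
ls app/build/outputs/apk/debug/app-debug.apk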

CodeBuild trigger using custom buildstep file for specific folders

I've been calling codebuild and manually overriding the buildspec like this:
aws codebuild start-build --cli-input-json file://servicea/custom.json
and then in custom.json
{
  "projectName": "myproject",
  "sourceVersion": "master",
  "buildspecOverride": "servicea/buildspec.yml"
}
Now I want to use a Bitbucket trigger (or GitHub if Bitbucket is not supported) to build the service automatically after a push to master.
I've been Googling and found this tutorial https://docs.aws.amazon.com/codebuild/latest/userguide/sample-bitbucket-pull-request.html
However, I met a roadblock where I couldn't build a specific folder with a specific buildspec.
e.g.
for servicea, the build should run if I push to master and change any files in servicea folder with servicea/buildspec.yaml as the buildspec
for serviceb, the build should run if I push to master and change any files in serviceb folder with serviceb/buildspec.yaml as the buildspec
There is a FILE_PATH filter in the trigger; however, I couldn't find a way to set a custom buildspec.
Is there any way to achieve this?
Note:
I want to use 1 codebuild project for all of my services
Bitbucket's webhook payload doesn't include the list of changed files, unlike GitHub's.
Workaround:
Set the "git-credential-helper" to "yes" (or true) in your buildspec. Details in https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
You can then fetch the list of file changed for the specific commit using the call mentioned in https://community.atlassian.com/t5/Bitbucket-questions/Bitbucket-How-to-get-modified-files-of-a-commit-in-JSON-format/qaq-p/704126
You can obtain the commit from the environment variable: CODEBUILD_RESOLVED_SOURCE_VERSION and the branch from: CODEBUILD_WEBHOOK_HEAD_REF. Details in https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html
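Putting it together, a minimal sketch of the dispatch logic that could run from a single shared buildspec, assuming the build directory has git metadata available (otherwise use the Bitbucket call above to get the same file list); the per-service build.sh scripts are hypothetical:
# List the files changed in the commit that triggered this build
CHANGED_FILES=$(git diff-tree --no-commit-id --name-only -r "$CODEBUILD_RESOLVED_SOURCE_VERSION")
# Only build a service when something under its folder changed
if echo "$CHANGED_FILES" | grep -q '^servicea/'; then
  (cd servicea && ./build.sh)   # hypothetical per-service build script
fi
if echo "$CHANGED_FILES" | grep -q '^serviceb/'; then
  (cd serviceb && ./build.sh)   # hypothetical per-service build script
fi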

Error in AWS codepipeline when deploying ElasticBeanstalk

I've created a pipeline which does the following:
Git changes trigger next action (code build)
Codebuild initiates & builds a docker image from git source
Set the latest Docker image up on Elastic Beanstalk
The first two steps are working fine: git changes initiate a CodeBuild run, the build produces a Docker image, and then the pipeline tries to set it up on Elastic Beanstalk, which fails. The following error is thrown:
Invalid action configuration. The action failed because either the artifact or the Amazon S3 bucket could not be found. Name of artifact bucket: MY_BUCKET_NAME. Verify that this bucket exists. If it exists, check the life cycle policy, then try releasing a change.
In my codebuild project, I've set the artifact location to MY_BUCKET_NAME & named it aws-test-artifact. Is this all I have to do?
I've tried looking around and am unable to find anything on this issue.
I had the same problem. I just changed the input artifact from BuildArtifact to SourceArtifact in the build stage, and everything worked.
As Adam Loving commented, we must add an artifacts section.
Adding this section to your buildspec.yml file will make this work:
artifacts:
  files:
    - '**/*'
Per the documentation (https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.artifacts.files), adding '**/*' includes all files in the build output artifact.
So I found the fix to this issue! What I had to do was go to CodeBuild => Edit project => Show advanced settings => Artifacts packaging.
From here I changed Artifacts packaging to Zip!

Deploy .war to AWS

I want to deploy a .war from Jenkins to the cloud.
Could you please let me know how to deploy a .war file from Jenkins on my local machine to AWS Elastic Beanstalk?
I tried using a Jenkins post-process plugin to copy the artifact to S3, but I get the following error:
ERROR: Failed to upload files java.io.IOException: put Destination [bucketName=https:, objectName=/s3-eu-west-1.amazonaws.com/bucketname/test.war]:
com.amazonaws.AmazonClientException: Unable to execute HTTP request: Connect to s3.amazonaws.com/s3.amazonaws.com/ timed out at hudson.plugins.s3.S3Profile.upload(S3Profile.java:85) at hudson.plugins.s3.S3BucketPublisher.perform(S3BucketPublisher.java:143)
Some work has been done on this.
http://purelyinstinctual.com/2013/03/18/automated-deployment-to-amazon-elastic-beanstalk-using-jenkins-on-ec2-part-2-guide/
Basically, this is just adding a post-build task to run the standard command line deployment scripts.
From the referenced page, assuming you have the post-build task plugin on Jenkins and the AWS command line tools installed:
STEP 1
In a Jenkins job configuration screen, add a “Post-build action” and choose the plugin “Publish artifacts to S3 bucket”. Specify the Source (in our case, we use Maven, so the source is target/*.war) and the Destination (your S3 bucket name).
STEP 2
Then, add a “Post-build task” (if you don’t have it, this is a plugin in the Maven repo) to the same section above (“Post-build Actions”) and drag it below “Publish artifacts to S3 bucket”. This ordering is important because we want to make sure the war file is uploaded to S3 before proceeding with the scripts.
In the Post-build task portion, make sure you check the box “Run script only if all previous steps were successful”
In the script text area, put in the path of the script to automate the deployment (described in step 3 below). For us, we put something like this:
<path_to_script_file>/deploy.sh "$VERSION_NUMBER" "$VERSION_DESCRIPTION"
$VERSION_NUMBER and $VERSION_DESCRIPTION are Jenkins build parameters and must be specified when a deployment is triggered. Both variables will be used for the AEB deployment.
STEP 3
The script
#!/bin/sh
export AWS_CREDENTIAL_FILE=<path_to_your aws.key file>
export PATH=$PATH:<path to bin file inside the "api" folder inside the AEB Command line tool (A)>
export PATH=$PATH:<path to root folder of s3cmd (B)>
# Get the current time and append it to the name of the .war file that's being deployed.
# This creates a unique identifier for each .war file and allows us to roll back easily.
current_time=$(date +"%Y%m%d%H%M%S")
original_file="app.war"
new_file="app_$current_time.war"
# Rename the deployed war file with the new name.
s3cmd mv "s3://<your S3 bucket>/$original_file" "s3://<your S3 bucket>/$new_file"
# Create the application version in AEB and link it with the renamed WAR file.
elastic-beanstalk-create-application-version -a "Hoiio App" -l "$1" -d "$2" -s "<your S3 bucket>/$new_file"