How to load AWS credentials in Jenkins job DSL?

I have the following DSL structure:
freeStyleJob {
    wrappers {
        credentialsBinding {
            [
                $class: "AmazonWebServicesCredentialsBinding",
                accessKeyVariable: "AWS_ACCESS_KEY_ID",
                credentialsId: "your-credential-id",
                secretKeyVariable: "AWS_SECRET_ACCESS_KEY"
            ]
        }
    }
    steps {
        // ACCESS AWS ENVIRONMENT VARIABLES HERE!
    }
}
However, this does not work. What is the correct syntax to do so? For Jenkins pipelines, you can do:
withCredentials([[
    $class: "AmazonWebServicesCredentialsBinding",
    accessKeyVariable: "AWS_ACCESS_KEY_ID",
    credentialsId: "your-credential-id",
    secretKeyVariable: "AWS_SECRET_ACCESS_KEY"
]]) {
    // ACCESS AWS ENVIRONMENT VARIABLES HERE!
}
but this syntax does not work in plain Job DSL Groovy.
tl;dr how can I export AWS credentials defined by the AmazonWebServicesCredentialsBinding plugin into environment variables in Groovy job DSL? (NOT PIPELINE PLUGIN SYNTAX!)

I found a solution to this problem:
wrappers {
    credentialsBinding {
        amazonWebServicesCredentialsBinding {
            accessKeyVariable("AWS_ACCESS_KEY_ID")
            secretKeyVariable("AWS_SECRET_ACCESS_KEY")
            credentialsId("your-credentials-id")
        }
    }
}
This will lead to the desired outcome.
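For completeness, here is a sketch of how the exported variables can then be consumed in a build step (the job name and shell command are illustrative, and the aws CLI is assumed to be available on the agent):
freeStyleJob('aws-example') {
    wrappers {
        credentialsBinding {
            amazonWebServicesCredentialsBinding {
                accessKeyVariable("AWS_ACCESS_KEY_ID")
                secretKeyVariable("AWS_SECRET_ACCESS_KEY")
                credentialsId("your-credentials-id")
            }
        }
    }
    steps {
        // AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are exported into the build environment
        shell('aws sts get-caller-identity')
    }
}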

I was not able to re-use Miguel's solution (even with the aws-credentials plugin installed), so here is another approach using a Job DSL configure block:
configure { project ->
    def bindings = project / 'buildWrappers' / 'org.jenkinsci.plugins.credentialsbinding.impl.SecretBuildWrapper' / 'bindings'
    bindings << 'com.cloudbees.jenkins.plugins.awscredentials.AmazonWebServicesCredentialsBinding' {
        accessKeyVariable("AWS_ACCESS_KEY_ID")
        secretKeyVariable("AWS_SECRET_ACCESS_KEY")
        credentialsId("credentials-id")
    }
}

This is the full detailed answer from @bitbrain, with a possible fix for the issue reported by @Viacheslav:
freeStyleJob {
    wrappers {
        credentialsBinding {
            amazonWebServicesCredentialsBinding {
                accessKeyVariable("AWS_ACCESS_KEY_ID")
                secretKeyVariable("AWS_SECRET_ACCESS_KEY")
                credentialsId("your-credentials-id")
            }
        }
    }
}
Ensure this is on the classpath for compilation:
compile "org.jenkins-ci.plugins:aws-credentials:1.23"
If you have tests running you might also need to add the plugin to the classpath:
testPlugins "org.jenkins-ci.plugins:aws-credentials:1.23"
I believe this is why there are reports of people needing to manually modify the XML to get this to work. Hint: if you can pass the compile stage (or compile in your IDE) but cannot compile the tests, then this is the issue.
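If your seed project follows a layout like the job-dsl-gradle-example, both entries would sit in the dependencies block (a sketch; the testPlugins configuration name follows that example project):
dependencies {
    // lets the Job DSL scripts compile against the plugin's DSL extension
    compile "org.jenkins-ci.plugins:aws-credentials:1.23"
    // lets the Jenkins test harness load the plugin when running the specs
    testPlugins "org.jenkins-ci.plugins:aws-credentials:1.23"
}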

Related

C++ Drogon framework model-based ORM

I've begun learning the C++ Drogon framework. I read the official and unofficial documentation about the Drogon ORM, but I couldn't work out how to create a model-based ORM database.
I want to create my models and then run a migration command to map the models to database tables.
If there is any documentation or guide about model-based ORM in Drogon, please let me know.
Thank you.
You can use
drogon::app().loadConfigFile("../config-name.json");
and then run your SQL commands from a Drogon plugin once the program is running. Drogon will also shut your custom plugin down when the program exits.
This requires a config file (config-name.json here), in which you register your plugin.
Steps:
1. Create the plugin
You can run drogon_ctl create plugin <[namespace::]class_name> or create it manually.
Assume that your project root namespace is MyProject and that your plugin class_name is my_plugin_01.
Note that you also need to add both files to the CMake add_executable call.
add_executable(${PROJECT_NAME} "main.h" "main.cc"
...
"controllers/foo.h" "controllers/foo.cc"
"filters/bar.h" "filters/bar.cc"
...
"plugins/my_plugin_01.h" "plugins/my_plugin_01.cc")
2. Register your plugin in the config file passed to loadConfigFile, which is config-name.json here
{
    "listeners": [
        // your list of addresses and ports
    ],
    ...
    // some of your config and so on
    ...
    "plugins": [
        {
            "name": "MyProject::plugins::my_plugin_01",
            "dependencies": [],
            "config": {
            }
        }
    ]
}
3. Implement my_plugin_01
my_plugin_01.h
#pragma once

#include <main.h>
#include <drogon/plugins/Plugin.h>

namespace MyProject
{
namespace plugins
{
class my_plugin_01 : public drogon::Plugin<my_plugin_01>
{
  public:
    my_plugin_01() {}

    // This method must be called by drogon to initialize and start the plugin.
    // It must be implemented by the user.
    virtual void initAndStart(const Json::Value &config) override;

    // This method must be called by drogon to shutdown the plugin.
    // It must be implemented by the user.
    virtual void shutdown() override;
};
} // namespace plugins
} // namespace MyProject
my_plugin_01.cc
#include "my_plugin_01.h"
using namespace drogon;
void MyProject::plugins::my_plugin_01::initAndStart(const Json::Value &config)
{
// Initialize and start the plugin
// Do your sql logic.
// Help: https://drogon.docsforge.com/master/database-general/database-dbclient/#execsqlasyncfuture
// log to console when DEBUG is true
if (MyProject::main::DEBUG)
{
std::cout << "\nPLUGIN INITIALIZE: my_plugin_01::initAndStart()\n\n";
}
}
void MyProject::plugins::my_plugin_01::shutdown()
{
// Shutdown the plugin
// Do your logic.
// log to console when DEBUG is true
if (MyProject::main::DEBUG)
{
std::cout << "\nPLUGIN SHUTDOWN: my_plugin_01::shutdown()\n\n";
}
}
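As an illustration of the "Do your sql logic" placeholder, here is a minimal sketch of running a migration-style statement from initAndStart (the table and SQL are made up, and a database client is assumed to be configured in config-name.json):
#include <drogon/drogon.h>

void MyProject::plugins::my_plugin_01::initAndStart(const Json::Value &config)
{
    // Grab the default DbClient configured in config-name.json
    auto db = drogon::app().getDbClient();
    db->execSqlAsync(
        "CREATE TABLE IF NOT EXISTS users (id SERIAL PRIMARY KEY, name TEXT)",
        [](const drogon::orm::Result &result) {
            LOG_INFO << "migration applied";
        },
        [](const drogon::orm::DrogonDbException &e) {
            LOG_ERROR << "migration failed: " << e.base().what();
        });
}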
Each time drogon::app().run() is called, your plugin(s) will be initialized, and they will be shut down when the program exits.
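For reference, a minimal sketch of the entry point that ties this together (file names as assumed above):
#include <drogon/drogon.h>

int main()
{
    // Registers the listeners, database clients and plugins declared in the config file
    drogon::app().loadConfigFile("../config-name.json");
    // initAndStart() of each plugin fires here; shutdown() runs on exit
    drogon::app().run();
    return 0;
}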
Another solution is to use a framework/library from another programming language (e.g. .NET, Django, etc.). You can do migrations, create tables and more from that framework/library.
This approach is fairly neat but a bit complicated; just be aware of your hardware resources.
Do this if you want to split your logic: use C++ for the heavy/intense workloads, and another stack when some logic cannot be achieved with C++.

GLB/GLTF File Loading with Storybook and Webpack with file-loader

I have a component library I am creating with Storybook that needs access to .glb/.gltf files. Based on my research, the best approach seemed to be to use Webpack's file-loader functionality and augment the Storybook main.js as follows:
// .storybook/main.js
const path = require('path');

module.exports = {
  "stories": [
    "../src/**/*.stories.mdx",
    "../src/**/*.stories.@(js|jsx|ts|tsx)"
  ],
  "addons": [
    "@storybook/addon-links",
    "@storybook/addon-essentials",
    "@storybook/preset-create-react-app"
  ],
  webpackFinal: async (config, { configType }) => {
    config.module.rules.push({
      test: /\.(glb|gltf)$/,
      use: ['file-loader'],
      include: path.resolve(__dirname, '../'),
    });
    return config;
  },
};
Then, in my jsx file that references the mesh:
// src/components/MeshLoader.jsx
import MyMeshFile from "./meshes/MyMesh.glb";
import { useGLTF } from "#react-three/drei";
export default function Model(props) {
const group = useRef();
const { nodes, materials } = useGLTF(MyMeshFile);
// Do more stuff with these things
}
When I compile, everything works, and if I log MyMeshFile, I get a path like static/media/MyMesh.976a5ad2.glb, as expected.
However, the rest breaks with an error Uncaught Unexpected token e in JSON at position 0, basically because the useGLTF function fails on the contents of that file.
It turns out that http://localhost:6006/static/media/MyMesh.976a5ad2.glb is actually a file with the contents of
export default __webpack_public_path__ + "178cb3da7737741d81a5d4f0c2bcc161.glb";
So it seems like there is some redirection happening. If I direct the browser at http://localhost:6006/178cb3da7737741d81a5d4f0c2bcc161.glb, I get the file I want.
My first question, is whether this is the expected behavior here, given the way I have things set up. If so, it seems like I would have to parse the contents of the file path given by Webpack, and use that to get the actual path. That seems to be a bit convoluted, so is there a better way of handling this?
Thanks for the help!
UPDATE:
I have tested with the gltf-webpack-loader, by adding the following to the .storybook/main.js file:
...
config.module.rules.push({
  test: /\.(gltf)$/, // Removed gltf from file-loader
  use: [{ loader: "gltf-webpack-loader" }]
})
...
And tried the same thing with a gltf file. I get the same behavior of receiving the "redirect" file instead of the actual one I want.
So it turns out that there is currently a bug with "@storybook/preset-create-react-app" that is causing this issue. Removing that add-on resolves the issue described here, although it does produce a warning:
Storybook support for Create React App is now a separate preset.
WARN To use the new preset, install `@storybook/preset-create-react-app` and add it to the list of `addons` in your `.storybook/main.js` config file.
WARN The built-in preset has been disabled in Storybook 6.0.
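Concretely, removing the preset just means dropping it from the addons array in the first snippet (a sketch of the resulting config):
// .storybook/main.js
"addons": [
  "@storybook/addon-links",
  "@storybook/addon-essentials"
  // "@storybook/preset-create-react-app" removed to work around the loader bug
],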

How can I configure Jenkins using .groovy config file to set up 'Build strategies -> Tags' in my multi-branch pipeline?

I want something similar for the 'Basic Branch Build Strategies' plugin: https://plugins.jenkins.io/basic-branch-build-strategies
I figured out something like this, but it's not working:
def traits = it / sources / data / 'jenkins.branch.BranchSource' / source / traits
traits << 'com.cloudbees.jenkins.plugins.bitbucket.TagDiscoveryTrait' {
    strategyId(3)
}
traits << 'jenkins.branch.buildstrategies.basic.TagBuildStrategyImpl' {
    strategyId(1)
}
Here you can find full config file: https://gist.github.com/sobi3ch/170bfb0abc4b7d91a1f757a9db07decf
The first trait, 'TagDiscoveryTrait', works fine, but the second one (my change), 'TagBuildStrategyImpl', doesn't apply on Jenkins restart.
How can I configure 'Build strategies -> Tags' in .groovy config for my multibranch pipeline using 'Basic Branch Build Strategies' plugin?
UPDATE: Maybe I don't need to use traits at all. Maybe there is a simpler solution. I'm not an expert in Jenkins Groovy configuration.
UPDATE 2: This is scan log for my code https://gist.github.com/sobi3ch/74051b3e33967d2dd9dc7853bfb0799d
I am using the following Groovy init script to set up a Jenkins job with a "tag" build strategy.
import hudson.util.PersistedList
import jenkins.branch.BranchSource
import jenkins.branch.buildstrategies.basic.TagBuildStrategyImpl
import com.cloudbees.jenkins.plugins.bitbucket.BitbucketSCMSource
import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject

// 'instance' is the Jenkins singleton, e.g. Jenkins.get()
def job = instance.createProject(WorkflowMultiBranchProject.class, "<job-name>")
PersistedList sources = job.getSourcesList()
// I am using Bitbucket, you need to replace this with your source
def pullRequestSource = new BitbucketSCMSource("<repo-owner>", "<repo-name>")
def source = new BranchSource(pullRequestSource)
source.setBuildStrategies([new TagBuildStrategyImpl(null, null)])
sources.add(source)
If I am recognizing the syntax correctly, the question is about Job DSL plugin.
The problem with the attempted solution is that the TagBuildStrategyImpl is not a Trait (known as Behavior in UI) but a Build Strategy. The error confirms this:
java.lang.ClassCastException: jenkins.branch.buildstrategies.basic.TagBuildStrategyImpl cannot be cast to jenkins.scm.api.trait.SCMSourceTrait
Class cannot be cast because TagBuildStrategyImpl does not extend SCMSourceTrait, it extends BranchBuildStrategy.
The best way to discover the JobDSL syntax applicable for a specific installation of Jenkins is to use the built-in Job DSL API Viewer. It is available under <jenkins-location>/plugin/job-dsl/api-viewer/index.html, e.g. https://ci.jenkins.io/plugin/job-dsl/api-viewer/index.html
On the version I am running, what you are trying to achieve would look approximately like this:
multibranchPipelineJob('foo') {
    branchSources {
        branchSource {
            source {
                bitbucket {
                    ...
                    traits {
                        bitbucketTagDiscovery()
                    }
                }
            }
            buildStrategies {
                buildTags { ... }
            }
        }
    }
}

In Jenkins "Job DSL Plugin", how to specify alternate location of pom.xml in 'mavenJob'?

I was looking at the instructions here and cannot figure out how to set an alternate location for the root POM, other than the default.
https://jenkinsci.github.io/job-dsl-plugin/#path/mavenJob
Does anyone out there know how to set that?
You can use the rootPOM DSL method.
mavenJob('example') {
    rootPOM('sub-module/pom.xml')
}
Try using the following code:
mavenJob('JobXXX') {
    scm {
        github('Repository/Project', 'master')
    }
    goals('clean compile build')
    rootPOM('projectname/pom.xml')
}

Clean task does not clean up specified outputs.file

I wrote a build.gradle script to automatically download Hazelcast from a given URL. Afterwards the file is unzipped, and only the mancenter.war as well as the original zip file remain in the destination directory. Later on this war file is referenced for a Jetty run.
Nevertheless, although I defined outputs.file for two of my tasks, the files do not get cleaned up when I execute gradle clean. So I'd like to know what I have to do so that the downloaded and unzipped files get removed when I execute gradle clean. Here is my script:
Btw, if you have any recommendations on how to enhance the script, please don't hesitate to tell me!
apply plugin: "application"

dependencies {
    compile "org.eclipse.jetty:jetty-webapp:${jettyVersion}"
    compile "org.eclipse.jetty:jetty-jsp:${jettyVersion}"
}

ext {
    distDir = "${projectDir}/dist"
    downloadUrl = "http://download.hazelcast.com/download.jsp?version=hazelcast-${hazelcastVersion}"
    zipFilePath = "${distDir}/hazelcast-${hazelcastVersion}.zip"
    warFilePath = "${distDir}/mancenter-${hazelcastVersion}.war"
    mainClass = "mancenter.MancenterBootstrap"
}

task downloadZip() {
    outputs.file file(zipFilePath)
    logging.setLevel(LogLevel.INFO)
    doLast {
        ant.get(src: downloadUrl, dest: zipFilePath)
    }
}

task extractWar(dependsOn: downloadZip) {
    outputs.file file(warFilePath)
    logging.setLevel(LogLevel.INFO)
    doLast {
        ant.unzip(src: zipFilePath, dest: distDir, overwrite: "true") {
            patternset() {
                include(name: '**/mancenter*.war')
            }
            mapper(type: "flatten")
        }
    }
}

task startMancenter(dependsOn: extractWar, type: JavaExec) {
    main mainClass
    classpath = sourceSets.main.runtimeClasspath
    args warFilePath
}
UPDATE
I found this link, which describes how to provide additional locations to delete when the clean task is invoked. Basically you can do something like this:
clean {
    delete zipFilePath
    delete warFilePath
}
I got confirmation from the source code that the clean task simply deletes the build directory. It assumes that you want to clean everything, and that all the task outputs are somewhere in this build directory.
So, the easiest and best practice is to only store outputs somewhere under the build directory.
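Applied to the script above, that would mean pointing distDir somewhere under buildDir (a sketch; only the ext block changes):
ext {
    // everything under build/ is deleted by `gradle clean`
    distDir = "${buildDir}/dist"
    zipFilePath = "${distDir}/hazelcast-${hazelcastVersion}.zip"
    warFilePath = "${distDir}/mancenter-${hazelcastVersion}.war"
}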
You can add tasks to clean like this:
clean.dependsOn(cleanExtractWar)
clean.dependsOn(cleanDownloadZip)
cleanTaskName is a virtual (rule-based) task that will clear all outputs of TaskName.