Creating rpm postinstall script with different behavior for OBS, GBS and target installation - opensuse

I need to create an rpm package that executes a different postinstall script depending on whether it is installed in an OBS environment, a GBS environment, or on the target device.
I'm looking for something like this in the spec file:
%post
if ### check for OBS env ###
then
    action_one
elif ### check for GBS env ###
then
    action_two
else
    ### target device ###
    action_three
fi
I'm using the official Tizen OBS and the Tizen developer tools, though I think it should look the same in the openSUSE Build Service.
I could not find any documented way to do this, and I'm not sure one exists. I did find that postinstall scripts, when executed from preinstall (Preinstall: and RunScripts: in the prjconf), are located in the /.init_b_cache/scripts/ directory, so this can be checked through the $0 bash variable.
I wonder if there is a way to check for the environment without marking the package as preinstall.
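Something like the following is what I have in mind, based on the $0 observation above (only a sketch, and it still relies on the package being marked as preinstall; telling OBS and GBS apart would need some additional marker of my own):
%post
# Sketch only: relies on preinstall-executed scriptlets being run from
# /.init_b_cache/scripts/ inside the build root, as noted above.
case "$0" in
    /.init_b_cache/scripts/*)
        action_one      # build environment (OBS or GBS)
        ;;
    *)
        action_three    # normal installation on the target device
        ;;
esac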

Related

SonarCloud C++ Docker CMake

I was trying to integrate SonarCloud into my build.
I created a free account on sonarcloud.io and added the necessary steps to the build pipeline.
When I ran the pipeline, I got the error:
ERROR: Error during SonarScanner execution java.lang.IllegalStateException: java.nio.file.NoSuchFileException: /home/vsts/work/1/s/bw-outputs/build-wrapper-dump.json
The process '/home/vsts/work/_tasks/SonarCloudAnalyze_ce096e50-6155-4de8-8800-4221aaeed4a1/1.20.0/sonar-scanner/bin/sonar-scanner' failed with exit code 1
I also tried with a .properties file:
sonar.projectKey=jfzlma0838_dockersample
sonar.projectName=dockersample
sonar.projectVersion=1.0
sonar.sources=app
# The build-wrapper output dir
sonar.cfamily.build-wrapper-output=bw-outputs
# Encoding of the source files
sonar.sourceEncoding=UTF-8
full repo here (master)
The most likely cause of this error is that you did not run the Build Wrapper.
Step 1. Download the Build Wrapper:
- Build Wrapper for Linux
- Build Wrapper for macOS
- Build Wrapper for Windows
Step 2. Unzip them and push them to your repository.
Step 3. Add their path to the environment variable PATH. You can use the following PowerShell script:
Write-Host "##vso[task.setvariable variable=PATH;]${env:PATH};$newPath";
Note that the path of the repository is $(System.DefaultWorkingDirectory) in Azure DevOps.
Step 4. Execute the Build Wrapper. You can check this document for detailed steps.
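With the bw-outputs directory from the .properties file above, the wrapped build would look roughly like this on Linux (a sketch only; the actual build command is whatever builds your project, assumed here to be a CMake build already configured in build/):
# Wrap a full, clean build so the wrapper observes every compiler invocation.
# --out-dir must match sonar.cfamily.build-wrapper-output (bw-outputs above).
build-wrapper-linux-x86-64 --out-dir bw-outputs cmake --build build --clean-first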

How to use ROS / CMake with GitLab CI

I have a simple project, but I have no experience with GitLab CI. I use ROS and CMake to build my project on my local machine (Ubuntu 18.04).
Now I want to build my project on GitLab, but this doesn't look easy for me.
Steps:
1) Installed the binary runners from here.
2) Registered the runners from here for Linux:
- used Docker as the executor (like GitLab CI; I have no experience with Docker)
- selected ruby:2.6 as the default image
3) Now I can see my runner under Settings > CI/CD > Runners.
4) Created the example .gitlab-ci.yml provided by GitLab:
build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"

test-job1:
  stage: test
  script:
    - echo "This job tests something"

test-job2:
  stage: test
  script:
    - echo "This job tests something, but takes more time than test-job1."
    - echo "After the echo commands complete, it runs the sleep command for 20 seconds"
    - echo "which simulates a test that runs 20 seconds longer than test-job1"
    - sleep 20

deploy-prod:
  stage: deploy
  script:
    - echo "This job deploys something from the $CI_COMMIT_BRANCH branch."
And it's working!
But now I want to build my own code on GitLab CI. File structure:
scripts/
  build.sh
src/
  CMakeLists.txt
  codes.cpp
binaries/
  (outputs will be here)
.gitlab-ci.yml
build.sh does everything that I want:
...
mkdir build
cd build
cmake .. -GNinja
ninja
So I just need to run it, but I don't know how to install the prerequisites.
Which system exactly am I working on right now, and how can I install the prerequisites? (Ubuntu 18.04, Docker, the runner... I just got mixed up.)
Which system exactly am I working on right now
You selected ruby:2.6 as the default image, so you are on ruby:2.6. You can then browse Docker Hub (https://hub.docker.com/_/ruby) and the Dockerfile (https://github.com/docker-library/ruby/blob/8e49e25b591d4cfa6324b6dada4f16629a1e51ce/2.6/buster/Dockerfile); I see it has "buster", which is the name of one of the Debian releases, so I guess it's Debian.
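If in doubt, you can also just print the distribution from inside a job to check (a quick look, not specific to any particular image):
# Show which distribution the job container is actually running.
cat /etc/os-release
uname -a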
how can I install prerequisites?
That depends on the image you are using; different Linux distributions use different package managers. I usually take a look at the Wikipedia page on package managers.
You could just use something like:
build:
  image: ubuntu
  script:
    - apt-get update
    - apt-get install -y cmake gcc whatever-else-you-have-on-your-machine
    - ./scripts/build.sh
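Since build.sh configures CMake with the Ninja generator, the job also needs ninja-build and a C++ compiler, and on a stock ubuntu image apt-get update has to run before any install (as in the job above) because the package lists start out empty. Roughly, the script part of the job boils down to something like this (package names beyond the build tools are assumptions; for ROS itself it is usually easier to start from one of the official ros images on Docker Hub than to install it by hand):
# What the job's script section effectively runs on a Debian/Ubuntu based image.
apt-get update
apt-get install -y --no-install-recommends g++ cmake ninja-build
./scripts/build.sh   # mkdir build && cd build && cmake .. -GNinja && ninja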

DPDK pktgen run.py error - Config file 'default' not found

I am learning pktgen, which is based on DPDK. There were no problems with the DPDK part itself (installation, allocating huge pages, binding NICs, running the DPDK sample programs, etc.). I followed Getting Started with Pktgen, and when I run run.py it shows an empty list of config files. When I then try to execute the instructions from the doc, I get an error saying "Config file 'default' not found".
root@ubuntu:/home/chang/pktgen-dpdk/tools# ./run.py
>>> sdk '/home/chang/dpdk', target 'x86_64-native-linuxapp-gcc'
*** Pick one of the following config files
Configurations:
Name - Description
---- -----------
root@ubuntu:/home/chang/pktgen-dpdk/tools# ./run.py -s default
>>> sdk '/home/chang/dpdk', target 'x86_64-native-linuxapp-gcc'
*** Config file 'default' not found
Configurations:
Name - Description
---- -----------
root@ubuntu:/home/chang/pktgen-dpdk/tools#
But in fact, the configuration files do exist in pktgen-dpdk/cfg:
root@ubuntu:/home/chang/pktgen-dpdk/cfg# ls
bond.cfg default.cfg pktgen-1.cfg server_mif.cfg
client_memif.cfg half-bond.cfg pktgen-2.cfg
client_mif.cfg many-cores.cfg server_memif.cfg
My system is Ubuntu 18.04 installed in VMware. I tried re-cloning the code, re-compiling DPDK and pktgen, and installing on another machine, but I got the same error.
Thanks in advance!
It looks like we have to run the script from the main project directory, i.e. do cd .. and try again. Here is the quote from the README.md:
Using the new tools/run.py script to setup and run pktgen with different configurations. The configuration files are located in the cfg directory with filenames ending in .cfg.
To use a configuration file:
$ ./tools/run.py -s default # to setup the ports and attach them to DPDK (only needed once per boot)
$ ./tools/run.py default # Run the default configuration
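In other words, with the paths from the question (a sketch):
# Run run.py from the pktgen-dpdk top-level directory, not from tools/,
# so it can find the configuration files in cfg/ (e.g. cfg/default.cfg).
cd /home/chang/pktgen-dpdk
./tools/run.py -s default   # set up the ports (once per boot)
./tools/run.py default      # run the default configuration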

How to install Google or-tools on AWS Lambda?

I've been successfully using Google's or-tools on AWS EC2 instances, but have recently been looking into including them in AWS Lambda functions and can't get it to run.
Function debug.py
Below is just a basic function importing pywrapcp from ortools, which should succeed if everything is set up correctly.
from ortools.constraint_solver import pywrapcp

def handler(event, context):
    print(pywrapcp)

if __name__ == '__main__':
    handler(None, None)
Failing Module Import
I created a package.sh script that copies all dependencies to the project following Amazon's instructions before creating a zip archive. Running the deployed code results in this:
Unable to import module 'debug': No module named ortools.constraint_solver
Contents of package.sh
#!/bin/bash
DEST_DIR=$(dirname $(realpath -s $0));
echo "Copy all native libraries...";
mkdir -p ./lib && find $directory -type f -name "*.so" | xargs cp -t ./lib;
echo "Create package...";
zip -r dist.zip debug.py lib;
rm -r ./lib;
echo "Add dependencies from $VIRTUAL_ENV to $DEST_DIR/dist.zip";
cd $VIRTUAL_ENV/lib/python2.7/site-packages;
zip -ur $DEST_DIR/dist.zip ./** -x;
When I copy the ortools folder from ortools-4.4.3842-py2.7-linux-x86_64.egg directly into the project root, it finds ortools but then fails to import pywrapcp. This may be related to a failure to load the native libraries, but I'm not sure, since the logs don't show much detail:
Unable to import module 'debug': cannot import name pywrapcp
Any ideas?
Following the discussion on Google or-tools, I put together a packaging script that works around the issues by installing the dependencies in a way that works for AWS Lambda.
The key part is that the contents of the egg packages have to be copied manually into the Lambda project folder and given the correct permissions so that they are accessible at runtime.
#!/bin/sh
easy_install3 py3-ortools
find "/opt/python3/lib/python3.6/site-packages" -path "*.egg/*" -not -name "EGG-INFO" -maxdepth 2 -exec cp -r {} ./dist \;
chmod -R 755 ./dist
Instead of creating and configuring an EC2 instance, you can use Docker to create a deployable package locally; see or-tools-lambda for details.
Firstly, the underlying AWS Lambda execution environment is Amazon Linux, while or-tools is not tested beyond the environments below, as per https://github.com/google/or-tools:
- Ubuntu 14.04 and 16.04 up (64-bit)
- Mac OS X El Capitan with Xcode 7.x (64-bit)
- Microsoft Windows with Visual Studio 2013 and 2015 (64-bit)
Test your code by launching an instance with one of the AMIs that AWS Lambda uses, from the list here: http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
If it works, use pip to install the dependencies/libraries at the root level of your project directory and then zip it. Do not copy the libraries manually into your project directory.
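A minimal sketch of that approach, reusing the debug.py handler from the question (the zip name and the exact package set are up to you):
# Install or-tools and its dependencies directly into the project root...
pip install ortools -t .
# ...then zip everything, including debug.py, and upload that archive to Lambda.
zip -r dist.zip .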

Why don't my custom recipes run on AWS OpsWorks?

I've created a GitHub repo for my simple custom recipe:
my-cookbook/
|- recipes/
   |- appsetup.rb
I've added the repo to Custom Chef Recipes as https://github.com/my-github-user/my-github-repo.git
I've added my-cookbook::appsetup to the Setup "cycle".
I know it's executed, because it fails to load if I mess up the syntax.
This is my appsetup.rb:
node[:deploy].each do |app_name, deploy|
  script "install_composer" do
    interpreter "bash"
    user "root"
    cwd "#{deploy[:deploy_to]}/current"
    code "curl -sS https://getcomposer.org/installer | php && php composer.phar install --no-dev"
  end
end
When I log into the instance via SSH as the ubuntu user, Composer isn't installed.
I've also tried the following, to no avail (a Node.js install):
node[:deploy].each do |app_name, deploy|
  execute "installing node" do
    command "add-apt-repository --yes ppa:chris-lea/node.js && apt-get update && sudo apt-get install python-software-properties python g++ make nodejs"
  end
end
Node doesn't get installed, and there are no errors in the log. The only references to the cookbook in the log are:
[2014-03-31T13:26:04+00:00] INFO: OpsWorks Custom Run List: ["opsworks_initial_setup", "ssh_host_keys", "ssh_users", "mysql::client", "dependencies", "ebs", "opsworks_ganglia::client", "opsworks_stack_state_sync", "mod_php5_apache2", "my-cookbook::appsetup", "deploy::default", "deploy::php", "test_suite", "opsworks_cleanup"]
...
[2014-03-31T13:26:04+00:00] INFO: New Run List expands to ["opsworks_initial_setup", "ssh_host_keys", "ssh_users", "mysql::client", "dependencies", "ebs", "opsworks_ganglia::client", "opsworks_stack_state_sync", "mod_php5_apache2", "my-cookbook::appsetup", "deploy::default", "deploy::php", "test_suite", "opsworks_cleanup"]
...
[2014-03-31T13:26:05+00:00] DEBUG: Loading Recipe my-cookbook::appsetup via include_recipe
[2014-03-31T13:26:05+00:00] DEBUG: Found recipe appsetup in cookbook my-cookbook
Am I missing some critical step somewhere? The recipe is clearly recognized and loaded, but doesn't seem to be executed.
(The following are fictitious names: my-github-user, my-github-repo, my-cookbook)
I know you've abandoned the cookbook, but I'm almost 100% sure it's because you don't have a metadata.rb file in the root of your cookbook.
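Something along these lines should be enough; a minimal sketch using the fictitious cookbook name from the question (the version is arbitrary):
# Create a minimal metadata.rb in the root of the cookbook repository.
cat > metadata.rb <<'EOF'
name    'my-cookbook'
version '0.0.1'
EOF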
Your cookbook name should not contain a dash. I had the same problem; replacing it with '_' solved it for me.
If those commands are failing silently, it could be that your use of && is obscuring a failure.
As for add-apt-repository, that is an interactive command. Try using the "--yes" option to answer yes by default, making it no longer interactive.
If your command does not execute successfully, you will not find the files in the current directory. Check inside the last release folder to see if it was put there.
It may be prudent to check that you have the right directory set up by changing the cwd to /tmp.