conda-build fails to recognise libraries - c++

I am trying to build a package using conda-build. This is my first time building a conda package, and I am having far more problems than I expected to.
Here is my build.sh:
CMAKE_PLATFORM_FLAGS+=(-DCMAKE_TOOLCHAIN_FILE="${RECIPE_DIR}/cross-linux.cmake")
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_INSTALL_PREFIX=$PREFIX "${CMAKE_PLATFORM_FLAGS[@]}" .
make
cp surveyor.py random_pos_generator.py $PREFIX/bin/
cp add_filtering_info call_insertions clip_consensus_builder dc_remapper filter normalise reads_categorizer $PREFIX/bin/
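As an aside on the array syntax in build.sh: `+=(...)` appends to a bash array, and quoting the `[@]` expansion passes each flag to cmake as its own argument. A minimal sketch (flag values here are just the ones from the script above):

```shell
# Build up CMake arguments in a bash array
CMAKE_PLATFORM_FLAGS=(-DCMAKE_BUILD_TYPE=RelWithDebInfo)
CMAKE_PLATFORM_FLAGS+=(-DCMAKE_TOOLCHAIN_FILE=cross-linux.cmake)
# "${arr[@]}" expands to one word per element, so each flag stays intact
echo "${CMAKE_PLATFORM_FLAGS[@]}"
```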
And here is my meta.yaml:
{% set name = "insurveyor" %}
{% set version = "1.0.1" %}
package:
  name: {{ name|lower }}
  version: {{ version }}

source:
  url: https://github.com/kensung-lab/INSurVeyor/archive/refs/tags/1.0.2.tar.gz
  sha256: 33c85157892d3256abc96bb2a9053f05da9dcf55befeed720e965577db0b78b5

build:
  skip: True  # [not linux]
  number: 0

requirements:
  build:
    - {{ compiler('c') }}
    - {{ compiler('cxx') }}
    - cmake >=3.5
    - autoconf ==2.69
  host:
    - libcurl
    - bzip2
    - xz
    - zlib
    - libdeflate
    - openssl
    - htslib >=1.13
  run:
    - python
    - numpy >=1.21.2

test:
  source_files:
    - demo/

about:
  home: https://github.com/kensung-lab/INSurVeyor
  summary: 'An insertion caller for Illumina paired-end WGS data.'
  description: XXX
  license: GPL-3.0-only
  license_file: LICENSE

extra:
  recipe-maintainers:
    - Mesh89
And here is the output of the process:
https://justpaste.it/3gfzf
(stack overflow would think the log is spam)
There are plenty of worrying messages, for example:
Warning: rpath /home/user/anaconda3/conda-bld/insurveyor_1670384223059/_build_env/lib is outside prefix /home/user/anaconda3/conda-bld/insurveyor_1670384223059/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_ (removing it)
Unknown format
WARNING :: Failed to get_static_lib_exports(/home/user/anaconda3/conda-bld/insurveyor_1670384223059/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/libssl.a)
Unknown format
Binary doesn't have a program header
When I try to run any executable, I get
symbol lookup error: /lib/x86_64-linux-gnu/libp11-kit.so.0: undefined symbol: ffi_type_pointer, version LIBFFI_BASE_7.0
I have spent a long time trying to figure out what is wrong and I am completely lost. When I google the warning/error messages, I cannot find any results.
I wonder if there is something fundamental that I misunderstand about the build process.
For example, why does conda complain about the rpath of the executables pointing to the build environment? Why those "Unknown format" warnings regarding the static libraries?

Related

Github Actions Failing for setup-gcloud

My previous deployments with the same GitHub workflow file were successful. Suddenly, today I get this error in GitHub Actions while trying to deploy.
May I know how to fix this?
Run google-github-actions/setup-gcloud@v0
/usr/bin/tar xz --warning=no-unknown-keyword --overwrite -C /home/runner/work/_temp/fa0cd935-fe7e-4593-8662-69259b4b00a0 -f /home/runner/work/_temp/52901a76-e32d-4cdf-92e4-83836f8c5362
Warning: "service_account_key" has been deprecated. Please switch to using google-github-actions/auth which supports both Workload Identity Federation and Service Account Key JSON authentication. For more details, see https://github.com/google-github-actions/setup-gcloud#authorization
Error: google-github-actions/setup-gcloud failed with: failed to execute command `gcloud --quiet auth activate-service-account *** --key-file -`: /opt/hostedtoolcache/gcloud/270.0.0/x64/lib/googlecloudsdk/core/console/console_io.py:544: SyntaxWarning: "is" with a literal. Did you mean "=="?
if answer is None or (answer is '' and default is not None):
/opt/hostedtoolcache/gcloud/270.0.0/x64/lib/third_party/ipaddress/__init__.py:1106: SyntaxWarning: 'str' object is not callable; perhaps you missed a comma?
raise TypeError("%s and %s are not of the same version" (a, b))
ERROR: gcloud failed to load: module 'collections' has no attribute 'MutableMapping'
gcloud_main = _import_gcloud_main()
import googlecloudsdk.gcloud_main
from googlecloudsdk.calliope import base
from googlecloudsdk.calliope import display
from googlecloudsdk.calliope import display_taps
from googlecloudsdk.core.resource import resource_printer_base
from googlecloudsdk.core.resource import resource_projector
from google.protobuf import json_format as protobuf_encoding
from google.protobuf import symbol_database
from google.protobuf import message_factory
from google.protobuf import reflection
from google.protobuf.internal import python_message as message_impl
from google.protobuf.internal import containers
MutableMapping = collections.MutableMapping

This usually indicates corruption in your gcloud installation or problems with your Python interpreter.

Please verify that the following is the path to a working Python 2.7 executable:
/usr/bin/python

If it is not, please set the CLOUDSDK_PYTHON environment variable to point to a working Python 2.7 executable.

If you are still experiencing problems, please reinstall the Cloud SDK using the instructions here:
https://cloud.google.com/sdk/
I fixed it by adding the lines below to the workflow YAML before uses: google-github-actions/setup-gcloud@v0:
- run: |
    sudo apt-get install python2.7
    export CLOUDSDK_PYTHON="/usr/bin/python2"
In my case setting the version to 318.0.0 also fixed the issue.
- name: Set up gcloud
  uses: google-github-actions/setup-gcloud@v0
  with:
    version: '318.0.0'
    service_account_email: ${{ secrets.GCP_SA_EMAIL }}
    service_account_key: ${{ secrets.GCP_SA_KEY }}
Based on the info at: AttributeError: module 'importlib' has no attribute 'util'
The answer above did not work for us; however, we were able to identify and fix the problem by doing the following.
Problem: the Python version used in our action was somehow 3.10.3, which is not compliant with the gcloud CLI.
The official gcloud docs say:
The gcloud CLI runs under Python. Note that gcloud requires Python version 3.5-3.9
Solution: we updated the GitHub workflow definition to set up a version of Python supported by the gcloud CLI.
- name: Setup python
  uses: actions/setup-python@v4
  with:
    python-version: '3.9'
- name: Export gcloud related env variable
  run: export CLOUDSDK_PYTHON="/usr/bin/python3"
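One caveat (a note on GitHub Actions behaviour, not part of the answer above): an export inside a run step only lasts for that step. To make CLOUDSDK_PYTHON visible to later steps such as setup-gcloud, GitHub Actions provides the GITHUB_ENV environment file:

```yaml
# Persist the variable for all subsequent steps in the job
- name: Export gcloud related env variable
  run: echo "CLOUDSDK_PYTHON=/usr/bin/python3" >> $GITHUB_ENV
```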
I ran into the same issue. It seems to be a gcloud and Python versioning problem.
Here is what solved it for me.
My workflow YAML before the fix:
- uses: google-github-actions/setup-gcloud@v0
  with:
    version: '270.0.0'
    service_account_email: ${{ secrets.SECRET_NAME }}
    service_account_key: ${{ secrets.SECRET_NAME }}
After the fix:
- name: Set up Python
  uses: actions/setup-python@v4
  with:
    python-version: '3.9'
- uses: google-github-actions/setup-gcloud@v0
  with:
    version: '318.0.0'
    service_account_email: ${{ secrets.SECRET_NAME }}
    service_account_key: ${{ secrets.SECRET_NAME }}

Salt states: run an action if a variable's stdout contains some word

There is a web page with a large piece of text on it.
I want to configure the state to perform a certain action if curl returns an error, i.e. if the variable doesn't contain 'StatusDescription : OK'.
How can I set up a check for a piece of text that is inside a variable?
{% set seqstat = salt['cmd.run']('powershell.exe curl http://127.0.0.1:5001 -UseBasicParsing') %}
{% if seqstat is sameas '*StatusDescription : OK*' %}
module_run:
  cmd.run:
    - name: 'powershell.exe WRITE-HOST have no Error'
{% else %}
module_run1:
  cmd.run:
    - name: 'powershell.exe WRITE-HOST have Error'
{%- endif -%}
Salt Version:
Salt: 3002.1
Dependency Versions:
cffi: 1.12.2
cherrypy: unknown
dateutil: 2.7.3
docker-py: 3.4.1
gitdb: 2.0.5
gitpython: 2.1.11
Jinja2: 2.10
libgit2: 0.27.7
M2Crypto: Not Installed
Mako: 1.0.7
msgpack-pure: Not Installed
msgpack-python: 0.5.6
mysql-python: 1.3.10
pycparser: 2.19
pycrypto: 2.6.1
pycryptodome: 3.6.1
pygit2: 0.27.4
Python: 3.7.3 (default, Jul 25 2020, 13:03:44)
python-gnupg: 0.4.4
PyYAML: 3.13
PyZMQ: 17.1.2
smmap: 2.0.5
timelib: Not Installed
Tornado: 4.5.3
ZMQ: 4.3.1
System Versions:
dist: debian 10 buster
locale: UTF-8
machine: x86_64
release: 4.19.0-6-amd64
system: Linux
version: Debian GNU/Linux 10 buster
I want to configure the state to perform a certain action if curl returns an error.
There is a Salt state called http which can query a URL and return the status. Using this (instead of curl) we can check for the status code(s) (200, 201, etc.), as well as matching text. Then we can use requisites to run subsequent states depending on the success/failure of the http.query.
Example:
I have added a check for status code of 200, you can omit - status: 200 if you don't care about the status code.
check-application:
  http.query:
    - name: http://127.0.0.1:5001
    - status: 200
    - match: 'StatusDescription : OK'

app-running:
  cmd.run:
    - name: 'powershell.exe WRITE-HOST have no Error'
    - require:
      - http: check-application

app-not-running:
  cmd.run:
    - name: 'powershell.exe WRITE-HOST have Error'
    - onfail:
      - http: check-application
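As a side note on the question's original Jinja attempt: the sameas test compares object identity and does not do wildcard matching, so '*StatusDescription : OK*' will never match. A plain substring check with Jinja's in operator would work instead; a sketch using the question's own commands:

```sls
{% set seqstat = salt['cmd.run']('powershell.exe curl http://127.0.0.1:5001 -UseBasicParsing') %}
{# 'in' performs a substring check on the command output #}
{% if 'StatusDescription : OK' in seqstat %}
module_run:
  cmd.run:
    - name: 'powershell.exe WRITE-HOST have no Error'
{% else %}
module_run1:
  cmd.run:
    - name: 'powershell.exe WRITE-HOST have Error'
{% endif %}
```

That said, the http.query approach above is more idiomatic Salt, since it avoids shelling out to curl entirely.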

Adding paths to a config file in the most efficient way via Ansible

I wrote a task that is responsible for changing supervisor config file. The case is that on some servers we have more than one app running workers, so sometimes more than one path needs to be added to include section of supervisor.conf.
Currently I have this task in /roles/supervisor/tasks/main.yml:
- name: Add apps paths in include section
  lineinfile:
    dest: /etc/supervisor/supervisord.conf
    regex: '^files ='
    line: 'files = /etc/supervisor/conf.d/*.conf /home/app/{{ app_name }}/releases/app/shared/supervisor/*.conf /home/dev/{{ app_name2 }}/releases/dev/shared/supervisor/*.conf'
  when: ansible_hostname == 'ser-db-10'
  notify: restart supervisor
  tags: multi_workers
... and added this in /roles/supervisor/defaults/main.yml:
app_name: bla
app_name2: blabla
It works, but I don't like that the two application paths are hardcoded in line, and maybe I should also use a variable in place of ser-db-10.
I am wondering how to rebuild this task to make it more independent.
What I mean is: if there are 4 apps, add 4 paths; if there are 2 apps, add 2 paths.
What is the most efficient way to do this?
As an example of how to put together the line parameter, the play below
- hosts: test_01
  vars:
    app_name1: A
    app_name2: B
    my_conf:
      test_01:
        lines:
          - '/etc/*.conf'
          - '/etc/{{ app_name1 }}/*.conf'
          - '/etc/{{ app_name2 }}/*.conf'
  tasks:
    - debug:
        msg: "files = {{ my_conf[inventory_hostname].lines|join(' ') }}"
gives
"msg": "files = /etc/*.conf /etc/A/*.conf /etc/B/*.conf"
With an appropriate dictionary my_conf, the task below should do the job
- name: Add apps paths in include section
  lineinfile:
    dest: /etc/supervisor/supervisord.conf
    regex: '^files ='
    line: "files = {{ my_conf[inventory_hostname].lines|join(' ') }}"
  notify: restart supervisor
  tags: multi_workers
(not tested)
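One way to supply a per-host my_conf without hardcoding it in the play (an assumption about how the inventory is organized, reusing the question's paths) is a host_vars file, e.g. a hypothetical host_vars/ser-db-10.yml:

```yaml
# Hypothetical host_vars/ser-db-10.yml; paths taken from the question.
# Each host lists exactly the app paths it needs, so 2 apps give 2 paths,
# 4 apps give 4 paths, and the task itself never changes.
my_conf:
  ser-db-10:
    lines:
      - '/etc/supervisor/conf.d/*.conf'
      - '/home/app/{{ app_name }}/releases/app/shared/supervisor/*.conf'
      - '/home/dev/{{ app_name2 }}/releases/dev/shared/supervisor/*.conf'
```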

Ansible INI lookup plugin gives ['<element>'], instead of <element>

I have a users.ini file having below content:
[integration]
# My integration information
user=gertrude
pass=anotherpassword
I am trying to fetch the value in the YAML file below using the lookup plugin for INI:
- hosts: "{{ vnf_ip }}"
  connection: local
  tasks:
    - debug: msg="User in integration is {{ lookup('ini', 'user section=integration file=users.ini') }}"
But I am getting this output:
TASK [debug] ***********************************************************************************************************************************
ok: [10.10.10.10] => {
"msg": "User in integration is ['gertrude']"
}
Instead of ['gertrude'] it should simply be gertrude.
How can I get just gertrude?
Which Ansible version do you use? On modern 2.3.2 it works as expected and returns just gertrude.
If you can't upgrade, you can use the first filter to get an element from your resulting list:
{{ lookup('ini', 'user section=integration file=users.ini') | first }}
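The brackets appear because the old lookup returns a one-element list and Jinja renders the list itself. A quick Python illustration (not Ansible itself) of the difference:

```python
# Interpolating the list itself prints the brackets and quotes,
# while taking the first element prints just the value.
value = ['gertrude']  # what the old ini lookup returns
print("User in integration is {}".format(value))
print("User in integration is {}".format(value[0]))
```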

Media folder permission while deploying django app with Ansible

I am deploying with Ansible. In a task I check whether certain folders exist (e.g. log, assets, etc.) and give them 755 permissions:
- name: Ensure source, static, media and logs directories exist
  file: dest={{ item }} mode=755 owner={{ app_user }} group={{ app_group }} state=directory
  with_items:
    - "{{ static_dir }}"
    - "{{ media_dir }}"
    - "{{ logs_dir }}"
I am running the app as app_user, who is in the apache group, so all my files and directories have app_user:apache ownership.
With the above permissions I'm not able to upload files to the media directory, but when I run chmod -R g+w media, uploads happen; but then Ansible stops working, as media gets apache:apache ownership.
How do I resolve this issue? What permissions do I give the media folder?
My django project resides in /var/www/www.example.com/ and media is in /var/www/www.example.com/src/media/
www.example.com folder has app_user:apache chown.
The Ansible file module needs the full octal number supplied to the mode parameter, rather than the shorthand 3 digit version we are used to using with the chmod command.
As mentioned on http://docs.ansible.com/ansible/file_module.html, "Leaving off the leading zero will likely have unexpected results.".
Try:
file: dest={{ item }} mode=0755 owner={{ app_user }} group={{ app_group }} state=directory
Hope that helps.
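For the group-write part of the question (an assumption beyond the answer above, not something it states): the media directory could be given mode 2775 so the apache group can write to it and, via the setgid bit, newly created files inherit the group. A sketch using the question's own variables:

```yaml
# Group-writable media dir with setgid (2775) so uploads by the
# apache group keep app_user:apache ownership semantics.
- name: Ensure media directory is group-writable
  file: dest={{ media_dir }} mode=2775 owner={{ app_user }} group={{ app_group }} state=directory
```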