My previous deployments with the same GitHub workflow file were successful.
Suddenly, today I get this error in GitHub Actions while trying to deploy.
May I know how to fix this?
Run google-github-actions/setup-gcloud@v0
/usr/bin/tar xz --warning=no-unknown-keyword --overwrite -C /home/runner/work/_temp/fa0cd935-fe7e-4593-8662-69259b4b00a0 -f /home/runner/work/_temp/52901a76-e32d-4cdf-92e4-83836f8c5362
Warning: "service_account_key" has been deprecated. Please switch to using google-github-actions/auth which supports both Workload Identity Federation and Service Account Key JSON authentication. For more details, see https://github.com/google-github-actions/setup-gcloud#authorization
Error: google-github-actions/setup-gcloud failed with: failed to execute command `gcloud --quiet auth activate-service-account *** --key-file -`: /opt/hostedtoolcache/gcloud/270.0.0/x64/lib/googlecloudsdk/core/console/console_io.py:544: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if answer is None or (answer is '' and default is not None):
/opt/hostedtoolcache/gcloud/270.0.0/x64/lib/third_party/ipaddress/__init__.py:1106: SyntaxWarning: 'str' object is not callable; perhaps you missed a comma?
  raise TypeError("%s and %s are not of the same version" (a, b))
ERROR: gcloud failed to load: module 'collections' has no attribute 'MutableMapping'
    gcloud_main = _import_gcloud_main()
    import googlecloudsdk.gcloud_main
    from googlecloudsdk.calliope import base
    from googlecloudsdk.calliope import display
    from googlecloudsdk.calliope import display_taps
    from googlecloudsdk.core.resource import resource_printer_base
    from googlecloudsdk.core.resource import resource_projector
    from google.protobuf import json_format as protobuf_encoding
    from google.protobuf import symbol_database
    from google.protobuf import message_factory
    from google.protobuf import reflection
    from google.protobuf.internal import python_message as message_impl
    from google.protobuf.internal import containers
    MutableMapping = collections.MutableMapping

This usually indicates corruption in your gcloud installation or problems with your Python interpreter.

Please verify that the following is the path to a working Python 2.7 executable:
    /usr/bin/python

If it is not, please set the CLOUDSDK_PYTHON environment variable to point to a working Python 2.7 executable.

If you are still experiencing problems, please reinstall the Cloud SDK using the instructions here:
    https://cloud.google.com/sdk/
I fixed it by using the lines below in the workflow YAML, before the uses: google-github-actions/setup-gcloud@v0 step:
- run: |
    sudo apt-get install python2.7
    export CLOUDSDK_PYTHON="/usr/bin/python2"
In my case setting the version to 318.0.0 also fixed the issue.
- name: Set up gcloud
  uses: google-github-actions/setup-gcloud@v0
  with:
    version: '318.0.0'
    service_account_email: ${{ secrets.GCP_SA_EMAIL }}
    service_account_key: ${{ secrets.GCP_SA_KEY }}
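Note the deprecation warning in the log above: service_account_key is deprecated in favor of google-github-actions/auth. If you can upgrade, a minimal sketch of the replacement flow suggested by the warning, assuming the same GCP_SA_KEY secret, would be:
- name: Authenticate to Google Cloud
  uses: google-github-actions/auth@v0
  with:
    credentials_json: ${{ secrets.GCP_SA_KEY }}
- name: Set up gcloud
  uses: google-github-actions/setup-gcloud@v0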
Based on the info at: "AttributeError: module 'importlib' has no attribute 'util'"
The answer above did not work for us; however, we were able to identify and fix the problem by doing the following.
Problem: the Python version used in our action was somehow 3.10.3, which is not compatible with the gcloud CLI.
The official gcloud docs say:
The gcloud CLI runs under Python. Note that gcloud requires Python version 3.5-3.9
Solution: we updated the GitHub workflow definition to set up a version of Python supported by the gcloud CLI.
- name: Setup python
  uses: actions/setup-python@v4
  with:
    python-version: '3.9'
- name: Export gcloud related env variable
  run: export CLOUDSDK_PYTHON="/usr/bin/python3"
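One caveat: an export inside a run: step only lasts for that step's shell, so it is not visible to a later setup-gcloud step. If that matters in your workflow, GitHub Actions' $GITHUB_ENV file persists variables across steps; a minimal sketch of that variant:
- name: Export gcloud related env variable
  run: echo "CLOUDSDK_PYTHON=/usr/bin/python3" >> "$GITHUB_ENV"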
I ran into the same issue. It seems to be a gcloud and Python versioning issue.
Here is what solved it for me:
My workflow YAML before the fix:
- uses: google-github-actions/setup-gcloud@v0
  with:
    version: '270.0.0'
    service_account_email: ${{ secrets.SECRET_NAME }}
    service_account_key: ${{ secrets.SECRET_NAME }}
After the fix:
- name: Set up Python
  uses: actions/setup-python@v4
  with:
    python-version: '3.9'
- uses: google-github-actions/setup-gcloud@v0
  with:
    version: '318.0.0'
    service_account_email: ${{ secrets.SECRET_NAME }}
    service_account_key: ${{ secrets.SECRET_NAME }}
I am trying to build a package using conda-build. This is my first time trying to build a conda package, and I am having far more problems than I expected to.
Here is my build.sh:
CMAKE_PLATFORM_FLAGS+=(-DCMAKE_TOOLCHAIN_FILE="${RECIPE_DIR}/cross-linux.cmake")
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_INSTALL_PREFIX=$PREFIX ${CMAKE_PLATFORM_FLAGS[@]} .
make
cp surveyor.py random_pos_generator.py $PREFIX/bin/
cp add_filtering_info call_insertions clip_consensus_builder dc_remapper filter normalise reads_categorizer $PREFIX/bin/
And here is my meta.yaml:
{% set name = "insurveyor" %}
{% set version = "1.0.1" %}

package:
  name: {{ name|lower }}
  version: {{ version }}

source:
  url: https://github.com/kensung-lab/INSurVeyor/archive/refs/tags/1.0.2.tar.gz
  sha256: 33c85157892d3256abc96bb2a9053f05da9dcf55befeed720e965577db0b78b5

build:
  skip: True  # [not linux]
  number: 0

requirements:
  build:
    - {{ compiler('c') }}
    - {{ compiler('cxx') }}
    - cmake >=3.5
    - autoconf ==2.69
  host:
    - libcurl
    - bzip2
    - xz
    - zlib
    - libdeflate
    - openssl
    - htslib >=1.13
  run:
    - python
    - numpy >=1.21.2

test:
  source_files:
    - demo/

about:
  home: https://github.com/kensung-lab/INSurVeyor
  summary: 'An insertion caller for Illumina paired-end WGS data.'
  description: XXX
  license: GPL-3.0-only
  license_file: LICENSE

extra:
  recipe-maintainers:
    - Mesh89
And here is the output of the process:
https://justpaste.it/3gfzf
(Stack Overflow would think the log is spam)
There are plenty of worrying messages, for example:
Warning: rpath /home/user/anaconda3/conda-bld/insurveyor_1670384223059/_build_env/lib is outside prefix /home/user/anaconda3/conda-bld/insurveyor_1670384223059/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_ (removing it)
Unknown format
WARNING :: Failed to get_static_lib_exports(/home/user/anaconda3/conda-bld/insurveyor_1670384223059/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/libssl.a)
Unknown format
Binary doesn't have a program header
When I try to run any executable, I get
symbol lookup error: /lib/x86_64-linux-gnu/libp11-kit.so.0: undefined symbol: ffi_type_pointer, version LIBFFI_BASE_7.0
I have spent a long time trying to figure out what is wrong and I am completely lost. When I google the warning/error messages, I cannot find any results.
I wonder if there is something fundamental that I misunderstand about the build process.
For example, why does conda complain about the rpath of the executables pointing to the build environment? Why those "Unknown format" warnings regarding the static libraries?
I have to run a Spark job (I am new to Spark) and I am getting the following error:
[2022-02-16 14:47:45,415] {{bash.py:135}} INFO - Tmp dir root location: /tmp
[2022-02-16 14:47:45,416] {{bash.py:158}} INFO - Running command: spark-submit --class org.xyz.practice.driver.PractitionerDriver s3://pfdt-poc-temp/xyz_test/org.xyz.spark-xy_mvp-1.0.0-SNAPSHOT.jar
[2022-02-16 14:47:45,422] {{bash.py:169}} INFO - Output:
[2022-02-16 14:47:45,423] {{bash.py:173}} INFO - bash: spark-submit: command not found
[2022-02-16 14:47:45,423] {{bash.py:177}} INFO - Command exited with return code 127
[2022-02-16 14:47:45,437] {{taskinstance.py:1482}} ERROR - Task failed with exception
What has to be done?
import logging
from airflow.operators.bash import BashOperator  # Airflow 2.x import path

def run_spark(**kwargs):
    import pyspark
    sc = pyspark.SparkContext()
    df = sc.textFile('s3://demoairflowpawan/people.txt')
    logging.info('Number of lines in people.txt = {0}'.format(df.count()))
    sc.stop()

spark_task = BashOperator(
    task_id='spark_java',
    bash_command='spark-submit --class {{ params.class }} {{ params.jar }}',
    params={'class': 'org.xyz.practice.driver.PractitionerDriver', 'jar': 's3://pfdt-poc-temp/xyz_test/org.xyz.spark-xy_mvp-1.0.0-SNAPSHOT.jar'},
    dag=dag
)
The question is - why do you expect the spark-submit to be there?
If you created the airflow default pods, then they come with airflow code only.
You can check here an example for spark and airflow - https://medium.com/codex/executing-spark-jobs-with-apache-airflow-3596717bbbe3 - and they state specifically "Spark binaries must be added and mapped".
So you need to figure out how to download the spark binaries to the existing airflow pod.
Alternatively - you can create another k8s job which will do the spark-submit, and have your DAG activate this job.
Sorry for the high-level answer...
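To make the second option slightly more concrete, here is a minimal sketch of a Kubernetes Job that runs the spark-submit from the question. The Job name and image are assumptions (any image that ships the Spark binaries would do), and reading from S3 would additionally need the Hadoop AWS jars and credentials:
apiVersion: batch/v1
kind: Job
metadata:
  name: spark-submit-practitioner  # hypothetical name
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: spark-submit
        image: bitnami/spark:3.2.1  # assumed image containing the Spark binaries
        command: ["spark-submit"]
        args:
        - --class
        - org.xyz.practice.driver.PractitionerDriver
        - s3://pfdt-poc-temp/xyz_test/org.xyz.spark-xy_mvp-1.0.0-SNAPSHOT.jar
The DAG can then create this Job (for example with a BashOperator running kubectl apply -f job.yaml) instead of calling spark-submit directly.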
I have the following config:
steps:
- name: 'alpine'
  args: ['echo', 'B: ${_BRANCH}', 'T: ${_TAG}', 'C => ${_CLIENT}']
If I run with:
gcloud builds submit --config=gcp/cloudbuild-main.yaml --substitutions _CLIENT='client',_BRANCH='branch',_TAG='tag' .
I get the following message:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: generic::invalid_argument: key in the template "_BRANCH" is not matched in the substitution data; substitutions = map[_CLIENT:client _BRANCH=branch _TAG=tag];key in the template "_TAG" is not matched in the substitution data; substitutions = map[_CLIENT:client _BRANCH=branch _TAG=tag];key "_CLIENT" in the substitution data is not matched in the template
If I declare the substitutions:
steps:
- name: 'alpine'
  args: ['echo', 'B: ${_BRANCH}', 'T: ${_TAG}', 'C => ${_CLIENT}']
substitutions:
  _BRANCH: b1
  _TAG: latest
  _CLIENT: c
It runs, but the substitution picks up only the first variable and the others become part of its value:
BUILD
Pulling image: alpine
Using default tag: latest
latest: Pulling from library/alpine
Digest: sha256:21a3deaa0d32a8057914f36584b5288d2e5ecc984380bc0118285c70fa8c9300
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest
B: b1 T: latest C => client _BRANCH=branch _TAG=tag
PUSH
DONE
There's a syntax nit in your command; joining --substitutions and its value with = (and quoting each value) should resolve it:
gcloud builds submit --config=gcp/cloudbuild-main.yaml --substitutions=_CLIENT="client",_BRANCH="branch",_TAG="tag" .
After submitting the build:
B: branch T: tag C => client
Reference: https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values
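Worth knowing: substitutions declared in the config act as defaults, and values passed on the command line with --substitutions= override them, so the two approaches can be combined. A sketch using the values from the question:
steps:
- name: 'alpine'
  args: ['echo', 'B: ${_BRANCH}', 'T: ${_TAG}', 'C => ${_CLIENT}']
substitutions:
  _BRANCH: b1     # default, overridden by --substitutions=_BRANCH=...
  _TAG: latest
  _CLIENT: c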
I've installed Argo on a managed k8s service following the guidelines here.
When I launch the following example task, I get an error (if you have Argo installed you should be able to copy-paste the code below):
# create a.yml
cat >> a.yml <<EOL
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-  # Name of this Workflow
spec:
  entrypoint: whalesay        # Defines "whalesay" as the "main" template
  templates:
  - name: whalesay            # Defining the "whalesay" template
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]   # This template runs "cowsay" in the "whalesay" image with arguments "hello world"
EOL
# submit a.yml
argo --insecure-skip-tls-verify --insecure-skip-verify -n argo submit a.yml
# monitor
$ argo list
# NAME STATUS AGE DURATION PRIORITY
# hello-world-hxrcp Succeeded 4m 10s 0
argo watch --insecure-skip-tls-verify --insecure-skip-verify -v -n argo hello-world-hxrcp
# DEBU[2021-06-09T19:37:22.125Z] CLI version version="{v3.0.7 2021-05-25T18:57:09Z e79e7ccda747fa4487bf889142c744457c26e9f7 v3.0.7 clean go1.16.3 gc linux/amd64}"
# DEBU[2021-06-09T19:37:22.125Z] Client options opts="(argoServerOpts=(url=127.0.0.1:2746,path=,secure=true,insecureSkipVerify=true,http=true),instanceID=)"
# DEBU[2021-06-09T19:37:22.125Z] curl -H 'Accept: text/event-stream' -H 'Authorization: ******' 'https://127.0.0.1:2746/api/v1/workflow-events/argo?listOptions.fieldSelector=metadata.name%3Dhello-world-hxrcp&listOptions.resourceVersion=0'
# FATA[2021-06-09T19:37:22.536Z] Get "https://127.0.0.1:2746/api/v1/workflow-events/argo?listOptions.fieldSelector=metadata.name%3Dhello-world-hxrcp&listOptions.resourceVersion=0": x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs
Why am I seeing this error?
The install process was this:
kubectl create namespace argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/stable/manifests/install.yaml
CLI (taken from the latest version here):
# Download the binary
curl -sLO https://github.com/argoproj/argo/releases/download/v3.0.7/argo-linux-amd64.gz
# Unzip
gunzip argo-linux-amd64.gz
# Make binary executable
chmod +x argo-linux-amd64
# Move binary to path
sudo mv ./argo-linux-amd64 /usr/local/bin/argo
# Test installation
argo version
# link with server
# recommended on user panel in interface
cat >> ~/.bashrc <<EOL
export ARGO_SERVER='127.0.0.1:2746'
export ARGO_HTTP1=true
export ARGO_SECURE=true
export ARGO_BASE_HREF=
export ARGO_TOKEN=''
export ARGO_NAMESPACE=argo
export ARGO_INSECURE_SKIP_VERIFY=true
EOL
# check it works:
argo list
Heyo, I ran into this issue when setting up with the Argo Helm chart on kind. The problem is that you have to disable TLS verification for the executor (the thing that executes the workflow) using the ARGO_KUBELET_INSECURE env var. Here are the docs: https://argoproj.github.io/argo-workflows/environment-variables/#executor
Sorry, I don't have the exact code change you need for your setup, but I'm sure you can figure that out now that you know what the problem is ;).
Here's what my helm values.yaml file looks like in case that helps anyone else:
server:
  serviceType: LoadBalancer
  extraArgs:
  - --auth-mode=server
controller:
  containerRuntimeExecutor: k8sapi
executor:
  env:
  - name: ARGO_KUBELET_INSECURE
    value: "true"  # env values must be strings in Kubernetes
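The values can then be applied with something like the following (chart and repo names assumed from the argo-helm project; adjust the release name and namespace to your setup):
helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade --install argo argo/argo-workflows -n argo -f values.yaml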
There is a web page with a large piece of text on it.
I want to configure the state to perform a certain action if curl returns an error, that is, if the variable doesn't contain 'StatusDescription : OK'.
How can I set up a check for a piece of text that is inside a variable?
{% set seqstat = salt['cmd.run']('powershell.exe curl http://127.0.0.1:5001 -UseBasicParsing') %}

{% if seqstat is sameas '*StatusDescription : OK*' %}
module_run:
  cmd.run:
    - name: 'powershell.exe WRITE-HOST have no Error'
{% else %}
module_run1:
  cmd.run:
    - name: 'powershell.exe WRITE-HOST have Error'
{%- endif -%}
Salt Version:
Salt: 3002.1
Dependency Versions:
cffi: 1.12.2
cherrypy: unknown
dateutil: 2.7.3
docker-py: 3.4.1
gitdb: 2.0.5
gitpython: 2.1.11
Jinja2: 2.10
libgit2: 0.27.7
M2Crypto: Not Installed
Mako: 1.0.7
msgpack-pure: Not Installed
msgpack-python: 0.5.6
mysql-python: 1.3.10
pycparser: 2.19
pycrypto: 2.6.1
pycryptodome: 3.6.1
pygit2: 0.27.4
Python: 3.7.3 (default, Jul 25 2020, 13:03:44)
python-gnupg: 0.4.4
PyYAML: 3.13
PyZMQ: 17.1.2
smmap: 2.0.5
timelib: Not Installed
Tornado: 4.5.3
ZMQ: 4.3.1
System Versions:
dist: debian 10 buster
locale: UTF-8
machine: x86_64
release: 4.19.0-6-amd64
system: Linux
version: Debian GNU/Linux 10 buster
I want to configure the state to perform a certain action if curl returns an error.
There is a Salt state called http which can query a URL and return the status. Using this (instead of curl), we can check for the status code(s) (200, 201, etc.) as well as for matching text. Then we can use requisites to run subsequent states depending on the success or failure of the http.query.
Example:
I have added a check for a status code of 200; you can omit - status: 200 if you don't care about the status code.
check-application:
  http.query:
    - name: http://127.0.0.1:5001
    - status: 200
    - match: 'StatusDescription : OK'

app-running:
  cmd.run:
    - name: 'powershell.exe WRITE-HOST have no Error'
    - require:
      - http: check-application

app-not-running:
  cmd.run:
    - name: 'powershell.exe WRITE-HOST have Error'
    - onfail:
      - http: check-application
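As a side note on the original attempt: sameas is Jinja's identity test, so it will never match a substring or glob pattern. If you would rather keep the cmd.run-plus-Jinja approach, a substring check with Jinja's in operator works; a minimal sketch:
{% set seqstat = salt['cmd.run']('powershell.exe curl http://127.0.0.1:5001 -UseBasicParsing') %}

{% if 'StatusDescription : OK' in seqstat %}
module_run:
  cmd.run:
    - name: 'powershell.exe WRITE-HOST have no Error'
{% else %}
module_run1:
  cmd.run:
    - name: 'powershell.exe WRITE-HOST have Error'
{% endif %}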