In a flatpak manifest, how to install a file only if it exists? - flatpak

I'm writing a small GNOME Builder application and I want to add a .env file to the build, but I want this file to be optional.
Right now my Flatpak manifest looks like this. It works if the .env file is present, but fails if .env is not there:
{
    "name" : "my-app",
    "builddir" : true,
    "buildsystem" : "meson",
    "build-commands" : [
        "install -D .env /app/share/my-app/my_app/.env"
    ],
    "sources" : [
        {
            "type" : "file",
            "path" : ".env"
        },
        {
            "type" : "git",
            "url" : "file:///home/user/Projects/my-app"
        }
    ]
}
I tried changing build-commands to check for the file like this, but that causes the build to fail:
...
"build-commands" : [
    "test -f .env && install -D .env /app/share/my-app/my_app/.env"
]
...
How can I include the file only if it is present?
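Note: flatpak-builder treats any non-zero exit status from a build command as a failure, and test -f returns non-zero when .env is absent, so the whole && compound command counts as a failed step. A minimal sketch of a guard that always exits 0 (this only addresses the build command itself, not the "file" source entry, which also expects the path to exist):
"build-commands" : [
    "if test -f .env; then install -D .env /app/share/my-app/my_app/.env; fi"
]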

Related

Artifactory jfrog - download artifact with regex and exclude

I'm just trying to download every artifact, for example:
maven-dsd-snapshot-local/com/dsds/aem/tenants/dcihub/dcihub-wrapper/1221.1.0-SNAPSHOT/something-wrapper-2023.1.0-20230206.113149-31.zip
but NOT
maven-dsd-snapshot-local/com/dsds/aem/platform/platform-wrapper/2023.1.0-SNAPSHOT/platform-wrapper-2023.1.0-20230206.113149-51.zip
This is what I'm trying to do in Jenkins using the Artifactory plugin:
Artifactory_BUILD_PATH = """{
"files": [
{
"pattern": "${repo}/(?!.*platform-wrapper).*-wrapper/.*.zip",
"target": "/tmp/artifacts/",
"flat": "true",
"build": "${buildName}/LATEST"
}
]
}"""
However, when I do that I get:
java.lang.ArrayIndexOutOfBoundsException
Without the negative-lookahead regex, this works and correctly matches all the wrapper paths:
Artifactory_BUILD_PATH = """{
"files": [
{
"pattern": "${repo}/*-wrapper/*.zip",
"target": "/tmp/artifacts/",
"flat": "true",
"build": "${buildName}/LATEST"
}
]
}"""
END GOAL:
Match all paths that have wrapper in them, but exclude platform-wrapper.
The download command only supports wildcards. It does not support regular expressions.
You can make use of the exclusions field in order to exclude certain paths.
See the File Specs documentation for more details.
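For example, a sketch of a file spec that keeps the working wildcard pattern and filters out the platform wrapper with exclusions (the exact exclusion pattern, and whether it needs the repository prefix, is an assumption; adjust it to your layout):
Artifactory_BUILD_PATH = """{
    "files": [
        {
            "pattern": "${repo}/*-wrapper/*.zip",
            "exclusions": ["${repo}/*platform-wrapper*"],
            "target": "/tmp/artifacts/",
            "flat": "true",
            "build": "${buildName}/LATEST"
        }
    ]
}"""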

Script exited with non-zero exit status: 100. Allowed exit codes are: [0] : packer error [duplicate]

I have a shell provisioner in Packer connected to a box with the user vagrant:
{
    "environment_vars": [
        "HOME_DIR=/home/vagrant"
    ],
    "expect_disconnect": true,
    "scripts": [
        "scripts/foo.sh"
    ],
    "type": "shell"
}
where the content of the script is:
whoami
sudo su
whoami
and the output strangely remains:
==> virtualbox-ovf: Provisioning with shell script: scripts/configureProxies.sh
virtualbox-ovf: vagrant
virtualbox-ovf: vagrant
Why can't I switch to the root user?
How can I execute statements as root?
Note, I do not want to wrap each statement like sudo "statement |foo", but rather switch the user globally, as demonstrated with sudo su.
You should override the execute_command. Example:
"provisioners": [
{
"execute_command": "echo 'vagrant' | {{.Vars}} sudo -S -E sh -eux '{{.Path}}'",
"scripts": [
"scripts/foo.sh"
],
"type": "shell"
}
],
There is another solution that simply uses two provisioners together.
Packer's shell provisioner can run bash with sudo privileges. First copy your script file from the local machine to the remote host with the file provisioner, then run it with the shell provisioner.
packer.json
{
    "vars": [...],
    "builders": [
        {
            # ...
            "ssh_username": "<some_user_other_than_root_with_passwordless_sudo>",
        }
    ],
    "provisioners": [
        {
            "type": "file",
            "source": "scripts/foo.sh",
            "destination": "~/shell.tmp.sh"
        },
        {
            "type": "shell",
            "inline": ["sudo bash ~/shell.tmp.sh"]
        }
    ]
}
foo.sh
# ...
whoami
sudo su root
whoami
# ...
output
<some_user_other_than_root_with_passwordless_sudo>
root
After the provisioner completes its task, you can delete the file with a shell provisioner.
packer.json updated
{
    "type": "shell",
    "inline": ["sudo bash ~/shell.tmp.sh", "rm ~/shell.tmp.sh"]
}
One possible answer seems to be:
https://unix.stackexchange.com/questions/70859/why-doesnt-sudo-su-in-a-shell-script-run-the-rest-of-the-script-as-root
sudo su <<HERE
ls /root
whoami
HERE
Maybe there is a better answer?
Assuming that the shell provisioner you are using is a bash script, you can add my technique to your script.
function if_not_root_rerun_as_root(){
    install_self   # install_self: assumed to be defined elsewhere in your script
    if [[ "$(id -u)" -ne 0 ]]; then
        run_as_root_keeping_exports "$0" "$@"
        exit $?
    fi
}
function run_as_root_keeping_exports(){
    # re-run the given command under sudo, forwarding the variables listed in $_EXPORTS
    eval sudo $(for x in $_EXPORTS; do printf '%s=%q ' "$x" "${!x}"; done;) "$@"
}
export _EXPORTS="PACKER_BUILDER_TYPE PACKER_BUILD_NAME"
if_not_root_rerun_as_root "$@"
There is a pretty good explanation of "$@" here on Stack Overflow.
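As a quick illustration of what "$@" buys you over $# or an unquoted $@: each original argument is forwarded as its own word, even when it contains spaces.
# hypothetical demo, separate from the provisioner script
set -- "one two" three      # simulate two positional arguments
printf '[%s]\n' "$@"        # prints [one two] then [three]
echo "$#"                   # prints 2, the argument count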

Running a shell script in CloudFormation cfn-init

I am trying to run a script in the cfn-init command, but it keeps timing out.
What am I doing wrong when running startup_script.sh?
"WebServerInstance" : {
"Type" : "AWS::EC2::Instance",
"DependsOn" : "AttachGateway",
"Metadata" : {
"Comment" : "Install a simple application",
"AWS::CloudFormation::Init" : {
"config" : {
"files": {
"/home/ec2-user/startup_script.sh": {
"content": {
"Fn::Join": [
"",
[
"#!/bin/bash\n",
"aws s3 cp s3://server-assets/startserver.jar . --region=ap-northeast-1\n",
"aws s3 cp s3://server-assets/site-home-sprint2.jar . --region=ap-northeast-1\n",
"java -jar startserver.jar\n",
"java -jar site-home-sprint2.jar --spring.datasource.password=`< password.txt` --spring.datasource.username=`< username.txt` --spring.datasource.url=`<db_url.txt`\n"
]
]
},
"mode": "000755"
}
},
"commands": {
"start_server": {
"command": "./startup_script.sh",
"cwd": "~",
}
}
}
}
},
The file part works fine and it creates the file, but it times out when running the command.
What is the correct way of executing a shell script?
You can tail the logs in /var/log/cfn-init.log and detect the issues while running the script.
The commands in CloudFormation Init are run as the root user by default. The issue may be that your script resides in /home/ec2-user/ while you are trying to run it from '~' (i.e. /root).
Give the absolute path (/home/ec2-user) in cwd; that should resolve the problem.
However, the exact issue can only be determined from the logs.
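For example, a sketch of the commands block with an absolute working directory, mirroring the paths used in the question:
"commands": {
    "start_server": {
        "command": "./startup_script.sh",
        "cwd": "/home/ec2-user"
    }
}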
Usually the init scripts are executed by root unless specified otherwise. Can you try giving the full path when running your startup script? You can give cloudkast a try; it is an online CloudFormation template generator that makes it easier to create objects such as AWS::CloudFormation::Init.

Can't get CloudFormation template to install apps during deployment

I have a CloudFormation template to deploy a Windows server and run some PowerShell commands. I can get the server to deploy, but none of my PowerShell commands seem to run; they are getting passed over.
I have been focusing on cfn-init to install my apps, with no luck.
{
    "AWSTemplateFormatVersion":"2010-09-09",
    "Description":"CHOCO",
    "Resources":{
        "MyEC2Instance1":{
            "Type":"AWS::EC2::Instance",
            "Metadata" : {
                "AWS::CloudFormation::Init": {
                    "configSet" : {
                        "config" : [
                            "extract",
                            "prereq",
                            "install"
                        ]
                    },
                    "extract" : {
                        "command" : "powershell.exe -Command Set-ExecutionPolicy -Force remotesigned"
                    },
                    "prereq" : {
                        "command" : "powershell.exe -Command Invoke-WebRequest -Uri https://xxxxx.s3.us-east-2.amazonaws.com/chocoserverinstall.ps1 -OutFile C:chocoserverinstall.ps1"
                    },
                    "install" : {
                        "command" : "powershell.exe -File chocoserverinstall.ps1"
                    }
                }
            },
            "Properties":{
                "AvailabilityZone":"us-east-1a",
                "DisableApiTermination":false,
                "ImageId":"ami-06bee8e1000e44ca4",
                "InstanceType":"t3.medium",
                "KeyName":"xxx",
                "SecurityGroupIds":[
                    "sg-01d044cb1e6566ef0"
                ],
                "SubnetId":"subnet-36c3a56b",
                "Tags":[
                    {
                        "Key":"Name",
                        "Value":"CHOCOSERVER"
                    },
                    {
                        "Key":"Function",
                        "Value":"CRISPAPPSREPO"
                    }
                ],
                "UserData":{
                    "Fn::Base64":{
                        "Fn::Join":[
                            "",
                            [
                                "<script>\n",
                                "cfn-init.exe -v ",
                                " --stack RDSstack",
                                " --configsets config ",
                                " --region us-east-1",
                                "\n",
                                "<script>"
                            ]
                        ]
                    }
                }
            }
        }
    }
}
I'm expecting CloudFormation to run through my metadata commands when provisioning this template.
The cfn-init command requires the -c or --configsets option to specify "a comma-separated list of configsets to run (in order)".
See:
cfn-init - AWS CloudFormation
AWS::CloudFormation::Init - AWS CloudFormation
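For reference, cfn-init.exe also has to be told which resource's metadata to read via -r/--resource. A minimal sketch of the UserData script with the flags spelled out (the logical ID MyEC2Instance1 is taken from the template above; the stack name is copied from the question as-is):
<script>
cfn-init.exe -v --stack RDSstack --resource MyEC2Instance1 --configsets config --region us-east-1
</script>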

Crossbar.io configure WSGI for Django app

I am experimenting with Crossbar.io 0.10.4 and Django 1.6.11, trying to follow the example here. The code shows you can configure Crossbar.io to serve up the Django app at "/" -- but when I try that in my configuration, I get a Python import error:
ApplicationError: ApplicationError('crossbar.error.invalid_configuration', args = (u"WSGI app module 'apache/django.wsgi' import failed: Import by filename is not supported. - Python search path was ....
My config.json is here:
{
    "controller": {
    },
    "workers": [
        {
            "type": "router",
            "realms": [
                {
                    "name": "backstage-producer",
                    "roles": [
                        {
                            "name": "anonymous",
                            "permissions": [
                                {
                                    "uri": "*",
                                    "publish": false,
                                    "subscribe": true,
                                    "call": false,
                                    "register": false
                                }
                            ]
                        }
                    ]
                }
            ],
            "transports": [
                {
                    "type": "web",
                    "endpoint": {
                        "type": "tcp",
                        "port": 8080
                    },
                    "paths": {
                        "/": {
                            "type": "wsgi",
                            "module": "apache/django.wsgi",
                            "object": "application"
                        },
                        "ws": {
                            "type": "websocket",
                            "debug": false
                        },
                        "notify": {
                            "type": "publisher",
                            "realm": "backstage-producer",
                            "role": "anonymous"
                        },
                        "static": {
                            "type": "static",
                            "directory": "../static"
                        }
                    }
                }
            ]
        }
    ]
}
The Python paths searched do not include my Django project directory. Typically I append my specific project directories to sys.path in my wsgi file, but apparently that workflow doesn't work with Crossbar.io. Trying a relative import fails (it needs a "package" argument), as does the full path (same import-by-filename error as above).
Removing the definition for "/" does not work, because Crossbar.io complains that it must be defined.
How can I set this up properly with Crossbar.io? My apache/django.wsgi file is below, for reference:
ALLDIRS = ['/usr/local/pythonenv/myapp/lib/python2.6/site-packages']
import os
import sys
import site
# from https://code.google.com/p/modwsgi/wiki/VirtualEnvironments
sys.path.insert(0, '/var/www/myapp/myapp_main/')
sys.path.insert(1, '/var/www/myapp/')
prev_sys_path = list(sys.path)
for directory in ALLDIRS:
    site.addsitedir(directory)
new_sys_path = []
for item in list(sys.path):
    if item not in prev_sys_path:
        new_sys_path.append(item)
        sys.path.remove(item)
sys.path[:0] = new_sys_path
os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp_main.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
----UPDATE 1------
Per Daniel's suggestion, I changed the file to just wsgi.py and my config to use the Python module path, not the filename / directory path. Config then looked like this:
"paths": {
"/": {
"type": "wsgi",
"module": "apache.wsgi",
"object": "application"
},
Throws the same exception:
ApplicationError: ApplicationError('crossbar.error.invalid_configuration', args = (u"WSGI app module 'apache.wsgi' import failed: No module named apache.wsgi - Python search path was
My directory structure is:
Project
|- apache
| |-__init__.py
| |-wsgi.py
|-.crossbar
|-config.json
-------UPDATE 2-------
The only solution (read "hack") I have found is to hard-code my project path into crossbar/worker/router.py so that it is included in the Python search path list:
sys.path.insert(0, '/var/www/myapp/myapp_main/')
sys.path.insert(1, '/var/www/myapp/')
Seems like there should be a better way...
The error is telling you that you have a file path in the setting that points to your WSGI file, whereas you need a Python module path. Your WSGI file should actually be a file called "wsgi.py" inside your project directory (which presumably is "apache", which is a strange name for a project that explicitly is not using Apache).
"/": {
"type": "wsgi",
"module": "apache.wsgi",
"object": "application"
},
Update: So I found the config docs at last: they really don't go out of their way to make it easy, like actually providing an index. Oh well.
It looks like you can provide an options hash to the router configuration including a pythonpath setting:
"workers": [
{
"type": "router",
"options": {
"pythonpath": ["/var/myapp/myapp_main/", "/var/myapp"]
},
...
"transports": {
...