What are the other Heroku environment variables? - web-services

When setting up a server, I noticed that the environment variable process.env.PORT is used. Are there any other variables like this? Where can I see all of them?

The following command will display all of the environment variables, not just those visible from heroku config:
heroku run printenv

heroku config does not show PORT, so it is incomplete if you need everything. heroku run printenv creates a one-off dyno and prints the full environment.
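For context, PORT is the value Heroku assigns when the dyno starts; a typical Node server just reads it with a local fallback. A minimal illustrative sketch (not from the question):
// Minimal HTTP server that binds to the port Heroku injects via process.env.PORT.
const http = require('http');
const port = process.env.PORT || 3000; // fall back to 3000 for local development
http.createServer((req, res) => {
  res.end('ok');
}).listen(port, () => console.log(`listening on ${port}`));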
From here: https://devcenter.heroku.com/articles/getting-started-with-nodejs#console
Run a console in a one-off dyno, then at the > prompt, type "console.log(process.env)":
$ heroku run node
Running `node` attached to terminal... up, run.4778
> console.log(process.env
... )
{ BUILDPACK_URL: 'https://github.com/MichaelJCole/heroku-buildpack-nodejs.git#wintersmith',
TERM: 'xterm',
SENDGRID_USERNAME: 'unicorns@heroku.com',
COLUMNS: '80',
DYNO: 'run.4778',
PATH: '/app/bin:/app/node_modules/.bin:bin:node_modules/.bin:/usr/local/bin:/usr/bin:/bin',
PWD: '/app',
PS1: 'fairydust',
LINES: '22',
SHLVL: '1',
HOME: '/app',
SENDGRID_PASSWORD: 'ponies',
PORT: '52031',
_: '/app/bin/node' }
undefined
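If you prefer not to use the interactive REPL, the same dump can be produced non-interactively (assuming the Node buildpack, so node is on the dyno's PATH):
$ heroku run node -e "console.log(JSON.stringify(process.env, null, 2))"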

The command is
heroku config
You can read more here: https://devcenter.heroku.com/articles/config-vars

See https://devcenter.heroku.com/articles/config-vars: there's a command that appears to tell you what your environment variables are.
$ heroku config
See if that works for you.
EDIT: it appears the Heroku docs linked above are wrong. Try this:
$ heroku config -s --app <appname>

Here are 100% of my environment variables for a working Node.js app.
The Heroku documentation for this is pretty shitty. You'd expect they'd have something like the Google App Engine runtime reference:
https://cloud.google.com/appengine/docs/standard/nodejs/runtime
But since they don't, my solution was just to create a simple REST endpoint that logs out all of the environment variables. Don't do this in a serious application; use Michael Cole's logging method instead.
Please don't hack me. This project won't exist after November 2022, because Heroku will no longer be free, so I'll risk it. I'm currently porting my code to Google App Engine.
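A minimal sketch of what such a debug endpoint could look like (hypothetical Express app, not the poster's actual code, and again: don't ship this, it exposes secrets):
// Debug-only route that dumps every environment variable as JSON.
// Assumes express is installed; remove before anyone else can reach the app.
const express = require('express');
const app = express();
app.get('/debug/env', (req, res) => {
  res.json(process.env); // includes DATABASE_URL, API keys, everything
});
app.listen(process.env.PORT || 3000);
The dump below is the kind of output it produces.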
HEROKU_EXEC_URL :
https://exec-manager.heroku.com/370aa52e-ced2-4ad1-9db7-b11f98f8a7fd
DATABASE_URL :
postgres://amrspwutkevecg:4958212525b67f2ee7a0f49c0c465f65da4c3e352880f5999f8ea6fac63a4cf5@ec2-34-200-35-222.compute-1.amazonaws.com:5432/ddb2djh5t66gqj
npm_config_user_agent :
npm/8.19.2 node/v16.18.0 linux x64 workspaces/false
JAVA_TOOL_OPTIONS :
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=1098
-Dcom.sun.management.jmxremote.rmi.port=1099
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.local.only=true
-Djava.rmi.server.hostname=172.17.42.66
-Djava.rmi.server.port=1099
npm_node_execpath :
/app/.heroku/node/bin/node
SHLVL : 0
npm_config_noproxy : <EMPTY STRING>
PORT : 19842
HOME : /app
npm_package_json : /app/package.json
PS1 : \[\033[01;34m\]\w\[\033[00m\] \[\033[01;32m\]$ \[\033[00m\]
npm_config_userconfig : /app/.npmrc
npm_config_local_prefix : /app
COLOR : 0
npm_config_metrics_registry : https://registry.npmjs.org/
_ : /app/.heroku/node/bin/npm
npm_config_prefix : /app/.heroku/node
WEB_CONCURRENCY : 1
npm_config_cache : /app/.npm
npm_config_node_gyp : /app/.heroku/node/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js
PATH : /app/node_modules/.bin:/node_modules/.bin
:/app/.heroku/node/lib/node_modules/npm/node_modules/@npmcli/run-script/lib/node-gyp-bin
:/app/.heroku/node/bin
:/app/.heroku/yarn/bin
:/usr/local/bin:/usr/bin
:/bin:/app/bin:/app/node_modules/.bin
NODE :/app/.heroku/node/bin/node
MEMORY_AVAILABLE : 512
NODE_HOME : /app/.heroku/node
HEROKU_JMX_OPTIONS :
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=1098
-Dcom.sun.management.jmxremote.rmi.port=1099
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.local.only=true
-Djava.rmi.server.hostname=172.17.42.66
-Djava.rmi.server.port=1099
HEROKU_APP_ID : 265115d9-6eb1-4352-a83c-05b844ece512
npm_lifecycle_script : node ./ATOMIC_IVY_MMO.JS
npm_lifecycle_event : start
npm_config_globalconfig : /app/.heroku/node/etc/npmrc
npm_config_init_module : /app/.npm-init.js
PWD : /app
npm_execpath : /app/.heroku/node/lib/node_modules/npm/bin/npm-cli.js
npm_config_global_prefix : /app/.heroku/node
npm_command : start
NODE_ENV : production
WEB_MEMORY : 512
DYNO : web.1
INIT_CWD : /app
EDITOR : vi
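Two of the values above, WEB_CONCURRENCY and WEB_MEMORY, are hints the Node buildpack derives from the dyno size; a common pattern (illustrative sketch, not from this app) is to use WEB_CONCURRENCY to size a cluster:
// Hypothetical cluster setup sized by WEB_CONCURRENCY (defaults to 1 worker).
const cluster = require('cluster');
const http = require('http');
const workers = parseInt(process.env.WEB_CONCURRENCY || '1', 10);
if (cluster.isPrimary) {
  for (let i = 0; i < workers; i++) cluster.fork(); // one worker per suggested process
} else {
  http.createServer((req, res) => res.end('ok'))
      .listen(process.env.PORT || 3000);
}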
The environment variables below are added after running:
heroku labs:enable runtime-dyno-metadata -a <app name>
They are non-standard, but can be useful. HEROKU_APP_NAME in particular lets your client-side app know where to send its XMLHttpRequests when calling your server's API (see the sketch after this list).
HEROKU_APP_ID : unique identifier for the application, e.g. "9daa2797-e49b-4624-932f-ec3f9688e3da"
HEROKU_APP_NAME : application name, e.g. "example-app"
HEROKU_DYNO_ID : dyno identifier (this metadata is not yet available in Private Spaces nor the Container Registry), e.g. "1vac4117-c29f-4312-521e-ba4d8638c1ac"
HEROKU_RELEASE_CREATED_AT : time and date the release was created, e.g. "2015-04-02T18:00:42Z"
HEROKU_RELEASE_VERSION : identifier for the current release, e.g. "v42"
HEROKU_SLUG_COMMIT : commit hash for the current release, e.g. "2c3a0b24069af49b3de35b8e8c26765c1dba9ff0"
HEROKU_SLUG_DESCRIPTION : description of the current release, e.g. "Deploy 2c3a0b2"
Documentation for the runtime dyno metadata:
https://devcenter.heroku.com/articles/dyno-metadata
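As an illustration of the XMLHttpRequest point above, here is a hypothetical helper that uses HEROKU_APP_NAME to build the API base URL handed to the client (it assumes the default <appname>.herokuapp.com domain, which may not match a custom domain setup):
// Hypothetical: derive the public API base URL from runtime-dyno-metadata.
function apiBaseUrl() {
  const appName = process.env.HEROKU_APP_NAME; // set by runtime-dyno-metadata
  return appName
    ? `https://${appName}.herokuapp.com/api`
    : `http://localhost:${process.env.PORT || 3000}/api`; // local fallback
}
// e.g. render it into the page the client loads, or expose it via a config endpoint
console.log('API base URL:', apiBaseUrl());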

Related

Gunicorn ModuleNotFoundError: No module named 'core' when deploying Django

I am trying to deploy my app. When I run this command:
gunicorn -c conf/gunicorn_config.py core.wsgi
I get this error:
ModuleNotFoundError: No module named 'core'
This is my directory structure:
home/uletin
--- conf
--------- ...
--------- gunicorn_config.py
--- env
--- graduates
------...
------core
------------ ...
------------ settings.py
------------ wsgi.py
------------ ...
------manage.py
------static
My gunicorn_config.py looks like this:
command = '/home/uletin/env/bin/gunicorn'
pythonpath = 'home/uletin/graduates'
bind = '165.22.98.56:8000'
workers = 3
Your issue is that the project directory and the main app inside it have different names.
You said that you renamed the main folder to graduates. What I would suggest is to change the name back to core and then run your command again. It should work, because Gunicorn expects the wsgi.py file to be in a folder with the same name as the project directory. If you change the name back to core, you'll need to update your pythonpath variable as well.
The other thing you can try, if you want to keep the name graduates, is to rename the inner core folder to graduates as well and modify your command to:
gunicorn -c conf/gunicorn_config.py graduates.wsgi

Passing environment variables to docker from GitLab CI/CD job failing

I am having issues passing variables defined in my GitLab CI file to my Dockerfile.
My GitLab CI file looks like this:
variables:
  IMAGE: "openjdk"
  IMAGE_TAG: "11-slim"

docker-image:
  extends: .build
  variables:
    DOCKER_IMAGE_VERSION: ${JDK_IMAGE}:${JDK_IMAGE_TAG}
My Dockerfile looks a bit like this:
# --- STAGE 1 ----------------------------------------------------------------
# Getting ARGS for build
ARG DOCKER_IMAGE_VERSION
# Start with a base image containing Java runtime
FROM ${DOCKER_IMAGE_VERSION} as build
Now I am getting the following error when the pipeline starts the docker build:
Step 1/7 : ARG DOCKER_IMAGE_VERSION
Step 2/7 : FROM ${DOCKER_IMAGE_VERSION} as build
base name (${DOCKER_IMAGE_VERSION}) should not be blank
Can someone help point out where I am going wrong?
Thanks
Consider defining a global ARG and overriding it at build time (e.g. docker build --build-arg sample_TAG=openjdk:11-slim). Note that an ARG declared before FROM is only usable by the FROM line itself; it has to be re-declared after FROM to be visible inside the stage. Example:
# Global ARG with a default; override it with --build-arg when you build
ARG sample_TAG=test
FROM $sample_TAG
# Re-declare the ARG so it is visible inside this build stage
ARG sample_TAG
WORKDIR /opt/sample-test
RUN echo "image tag is ${sample_TAG}"
VOLUME /opt
RUN mkdir -p /opt/sample-test

gsutil command crashes every time on Windows 10

Whenever I run some gsutil command, for example gsutil components update, it exits with this error:
ERROR: gcloud crashed (LookupError): unknown encoding: cp65001
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
Running gcloud info --run-diagnostics as it suggests also fails with the same error:
Network diagnostic detects and fixes local network connection issues.
Checking network connection...failed.
ERROR: gcloud crashed (LookupError): unknown encoding: cp65001
Does anybody know how to fix this?
I've tried setting PYTHONIOENCODING=UTF-8 (Python 2.7: LookupError: unknown encoding: cp65001), but it didn't help; I think gsutil uses its own Python, and it might be ignoring/resetting this variable.
Edit:
I'm using PowerShell; it already has UTF-8 set as the output encoding:
[Console]::OutputEncoding
BodyName : utf-8
EncodingName : Unicode (UTF-8)
HeaderName : utf-8
WebName : utf-8
WindowsCodePage : 1200
IsBrowserDisplay : True
IsBrowserSave : True
IsMailNewsDisplay : True
IsMailNewsSave : True
IsSingleByte : False
EncoderFallback : System.Text.EncoderReplacementFallback
DecoderFallback : System.Text.DecoderReplacementFallback
IsReadOnly : True
CodePage : 65001
Reinstalling the Cloud SDK with "Bundled Python" unchecked did the trick for me. I have Python 2.7 installed independently.
Just run:
set PYTHONIOENCODING=UTF-8

Why am I getting an unsatisfiable error when trying to install pypyodbc using conda?

I'm trying to install the pypyodbc package, but I'm running into an error:
(base) C:\>conda install -c zegami pypyodbc
Solving environment: failed
UnsatisfiableError: The following specifications were found to be in conflict:
- backports.functools_lru_cache
- pypyodbc
Use "conda info <package>" to see the dependencies for each package.
I don't understand what the problem is here, or why backports.functools_lru_cache would have any impact. I've also tried pypyodbc packages from other channels, including CIT and mbonix. The specific package causing the error in each case is different (urllib3 and futures, respectively), but the result is the same. In any event, here is the output from conda info. I'd appreciate any help I can get!
Thanks,
Brad
(base) C:\>conda info
active environment : base
active env location : C:\ProgramData\Anaconda2
shell level : 1
user config file : C:\Users\braddavi\.condarc
populated config files : C:\Users\braddavi\.condarc
conda version : 4.5.4
conda-build version : 3.10.5
python version : 2.7.15.final.0
base environment : C:\ProgramData\Anaconda2 (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/win-64
https://repo.anaconda.com/pkgs/free/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/pro/win-64
https://repo.anaconda.com/pkgs/pro/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : C:\ProgramData\Anaconda2\pkgs
C:\Users\braddavi\AppData\Local\conda\conda\pkgs
envs directories : C:\ProgramData\Anaconda2\envs
C:\Users\braddavi\AppData\Local\conda\conda\envs
C:\Users\braddavi\.conda\envs
platform : win-64
user-agent : conda/4.5.4 requests/2.18.4 CPython/2.7.15 Windows/2012ServerR2 Windows/6.3.9600
administrator : True
netrc file : None
offline mode : False
(base) C:\>

Virtualenv have multiple possible locations

A colleague of mine implemented a shell script with the following line:
output="$(venv/bin/python manage.py has_missing_migrations --quiet --settings=project.tests_settings 2>&1)"
Here is the full code:
# Check missing migrations
output="$(venv/bin/python manage.py has_missing_migrations --quiet --settings=project.tests_settings 2>&1)"
[ $? -ne 0 ] \
&& ipoopoomypants "Migrations" "$output" \
|| irock "Migrations"
If I run the script, I obtain
Running pre-commit checks:
[OK] anonymize_db ORM queries
[OK] Forbidden Python keywords
[OK] Forbidden JavaScript keywords
[OK] Forbidden HTML keywords
[FAIL] Migrations
COMMIT REJECTED: .git/hooks/pre-commit: line 88: venv/bin/python: No such file or directory
The problem with the above line is that it assumes the virtual environment has been created inside the project itself. However, that is not always the case. As far as I'm concerned, I work with virtualenvwrapper, so my virtualenv is not ./venv but ~/.virtualenvs/venv.
Question: How could I modify the above line so that it considers both paths, ./venv and ~/.virtualenvs/venv?
You should probably use the WORKON_HOME environment variable to point to the location of the virtualenvs instead of hard-coding it.