OpenShift + Django: 503 Service Unavailable + project root

I'm trying to set up OpenShift to publish my Django project.
I created a scalable Python 3.3 app with Django preinstalled, and I added the PostgreSQL 9.2 cartridge.
I found the directory structure quite complicated, but in the end I noticed that the default example project was located under apps-root/runtime/repo/wsgi/openshift/, so I moved all files from that directory into a folder named 'backup' and pasted my project there.
Now when I visit my site I get:
503 Service Unavailable
No server is available to handle this request.
I read that this can be due to HAProxy. I tried restarting my app through the OpenShift Online web interface, but I still get the same error.
So:
1) How can I solve this issue?
2) How can I change the root folder of my project from apps-root/runtime/repo/wsgi/openshift/ to the root of my git repo, so that I don't have unwanted folders (i.e. /wsgi/openshift/) in my local and Bitbucket repos?
UPDATE:
Looking at my logs, I get:
==> python/logs/appserver.log <==
server = server_class((host, port), handler_class)
File "/opt/rh/python33/root/usr/lib64/python3.3/socketserver.py", line 430, in __init__
self.server_bind()
File "/opt/rh/python33/root/usr/lib64/python3.3/wsgiref/simple_server.py", line 50, in server_bind
HTTPServer.server_bind(self)
File "/opt/rh/python33/root/usr/lib64/python3.3/http/server.py", line 135, in server_bind
socketserver.TCPServer.server_bind(self)
File "/opt/rh/python33/root/usr/lib64/python3.3/socketserver.py", line 441, in server_bind
self.socket.bind(self.server_address)
OSError: [Errno 98] Address already in use
If I visit the HAProxy status page, in the "Express" table the "Server Status" column shows DOWN in both the "local-gear" and "backend" rows.

I had the same issue, and it was resolved after changing haproxy.cfg.
option httpchk GET /
Comment out that line in haproxy.cfg, or else set it to
option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
where www is your app's URL. See http://haproxy.1wt.eu/download/1.4/doc/configuration.txt for details if you want to know more about HAProxy configuration. Hope it works.
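For orientation, the relevant section of the gear's haproxy.cfg looks roughly like the illustrative sketch below (section and server names vary per gear, so treat this only as a shape; the single line you need to touch is the option httpchk one):

listen express
    bind 127.0.0.1:8080
    balance leastconn
    # option httpchk GET /     (original health check: comment it out...)
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    server local-gear 127.0.0.1:8080 check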

If you want to set up Django yourself, you might want to check out this thread, as I think it will help: How to configure Django on OpenShift?
If you want to use something prebuilt, then check out the Django quickstart here: https://www.openshift.com/quickstarts/django
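For question 2, note that the Python cartridge ultimately only needs to find a WSGI application callable; the Django project layout around it is up to you. A minimal sketch of that entry point, assuming your Django project package is named myproject and that your cartridge imports wsgi.py (or wsgi/application) from the repo root (check the quickstart for the exact filename it expects):

# wsgi.py: hypothetical entry point for the OpenShift Python cartridge; adjust names to your project.
import os

# "myproject" is a placeholder for your Django project package.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

from django.core.wsgi import get_wsgi_application

# The cartridge serves whatever is exposed as "application".
application = get_wsgi_application()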

Related

Error while installing the VC++ redistributable using .ebextensions

I have an ASP.NET Core web application which publishes to AWS using Elastic Beanstalk. In order to configure the Windows environment I am using .ebextensions, which copies the VC++ redistributables from S3 and installs them while creating the environment.
When publishing, I am getting the error "Error occurred during build: Command 01_instlVCx64 failed". Below is the command in my .ebextensions:
files:
  "c:\\vcpp-redistributables\\vc_redist_x64.exe":
    source: https://<bucket_name>.s3.eu-west-2.amazonaws.com/vcpp-redistributables/vc_redist_x64.exe
    authentication: S3Access
commands:
  01_instlVCx64:
    command: c:\\vcpp-redistributables\\vc_redist_x64.exe /q /norestart
Below is the traceback from the logs:
2022-03-22 15:31:35,876 [ERROR] Error encountered during build of prebuild_0_GWebApp: Command 01_instlVCx64 failed
Traceback (most recent call last):
File "cfnbootstrap\construction.pyc", line 578, in run_config
File "cfnbootstrap\construction.pyc", line 146, in run_commands
File "cfnbootstrap\command_tool.pyc", line 127, in apply
cfnbootstrap.construction_errors.ToolError: Command 01_instlVCx64 failed
2022-03-22 15:31:35,876 [ERROR] -----------------------BUILD FAILED!------------------------
Could you please let me know what I am missing?
Thanks in advance.
I found the issue a couple of days ago, so I thought I'd answer my own question so that it will be useful for others.
The issue is that the Elastic Beanstalk instance (Windows Server 2019) already has the VC++ redistributables installed, and it is a later version than the one I am trying to install as part of .ebextensions. So, when I tried to install, it failed.
I figured it out by enabling an RDP connection to the EC2 instance that is created as part of Elastic Beanstalk and running the scripts manually, which gave a detailed error message.
Hope it helps someone in the future.
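If you do need to keep the install step (for example, for platform versions that ship without the runtime), one possible tweak is sketched below using the .ebextensions ignoreErrors key: it tells Elastic Beanstalk not to fail the deployment when the installer exits non-zero because a newer VC++ version is already present.

commands:
  01_instlVCx64:
    command: c:\\vcpp-redistributables\\vc_redist_x64.exe /q /norestart
    # Don't fail the whole deployment if the installer exits non-zero,
    # e.g. because a newer VC++ redistributable is already installed.
    ignoreErrors: true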

Using Google Cloud Run (or any cloud service) with a Docker Compose app with a simple Python script

I have a relatively simple Docker application using docker-compose that I would like to deploy. All it contains is a Python script that I would like to run automatically every day; it doesn't require user input.
I would like to use Google Cloud Run to deploy this application, since it doesn't need to be online 24/7, but I'm not sure if it's compatible with a docker-compose.yml.
Here is my docker-compose file:
version: '3.9'
secrets:
  venmo_api_key:
    file: ./secrets/venmo_api_token.txt
services:
  app:
    build: ./app
    secrets:
      - venmo_api_key
As you can see, I need docker-compose so that my secrets can be used in my container just by running docker-compose up. It runs fine locally!
To deploy my image to Google Container Registry, I've run:
docker-compose build
docker tag cb3605 gcr.io/venmoscription-v2/venmoscription-service
docker push gcr.io/venmoscription-v2/venmoscription-service
In Google Cloud Run, I selected the GCR URL and left all the other options as default just to see if my container could run online. However, I got this error in Google Cloud Run Logs:
False for Revision venmoscription-service-00001-qig with message: Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I also got this error message afterwards:
Traceback (most recent call last):
File "/app/main.py", line 13, in <module>
client = Client(access_token = get_docker_secret("venmo_api_key"))
File "/home/app/.local/lib/python3.9/site-packages/venmo_api/venmo.py", line 15, in __init__
self.__profile = self.user.get_my_profile()
File "/home/app/.local/lib/python3.9/site-packages/venmo_api/apis/user_api.py", line 26, in get_my_profile
response = self.__api_client.call_api(resource_path=resource_path,
File "/home/app/.local/lib/python3.9/site-packages/venmo_api/utils/api_client.py", line 58, in call_api
return self.__call_api(resource_path=resource_path, method=method,
File "/home/app/.local/lib/python3.9/site-packages/venmo_api/utils/api_client.py", line 103, in __call_api
processed_response = self.request(method, url, session,
File "/home/app/.local/lib/python3.9/site-packages/venmo_api/utils/api_client.py", line 139, in request
validated_response = self.__validate_response(response, ok_error_codes=ok_error_codes)
File "/home/app/.local/lib/python3.9/site-packages/venmo_api/utils/api_client.py", line 170, in __validate_response
raise HttpCodeError(response=response)
venmo_api.models.exception.HttpCodeError: HTTP Status code is invalid. Could not make the request because -> 401 Unauthorized.
Basically, the container in Google Cloud Run was unable to access the secret that I defined in docker-compose.yml.
Does anyone know what I should be doing, or could you explain how to get my docker-compose app up and running with a serverless solution? Thank you!
Cloud Run doesn't support multiple containers like docker-compose does, so you'll need to deploy a single container that accomplishes your goal. Cloud Run does expect that your container starts up and listens on a port (like a web application) or else it won't start.
This page has a good step-by-step example of a simple Python app that deploys to Cloud Run and listens on a port:
https://cloud.google.com/run/docs/quickstarts/build-and-deploy/python
You can also make your Venmo secret available at a path inside your service by using Google Secret Manager:
https://cloud.google.com/run/docs/configuring/secrets
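Putting those two pieces together, here is a minimal sketch of a single-container service, assuming Flask is installed in your image and the secret is mounted as a file at /secrets/venmo_api_key (that path, and the job logic, are placeholders):

# main.py: minimal sketch of a container that Cloud Run can start.
import os

from flask import Flask

app = Flask(__name__)

def read_secret(path="/secrets/venmo_api_key"):
    # Secret Manager can mount the secret as a file at whatever path you configure.
    with open(path) as f:
        return f.read().strip()

@app.route("/")
def run_job():
    api_key = read_secret()
    # ... call the Venmo client here, instead of at module import time ...
    return "ok", 200

if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via the PORT env variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))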
I hope that helps get you started.
Best,
Josh

AssertionError: INTERNAL: No default project is specified

New to Airflow. I'm trying to run a SQL query and store the result in a BigQuery table.
I'm getting the following error, and I'm not sure where to set up the default_project_id.
Please help me.
Error:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 28, in <module>
args.func(args)
File "/usr/local/lib/python2.7/dist-packages/airflow/bin/cli.py", line 585, in test
ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)
File "/usr/local/lib/python2.7/dist-packages/airflow/utils/db.py", line 53, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/airflow/models.py", line 1374, in run
result = task_copy.execute(context=context)
File "/usr/local/lib/python2.7/dist-packages/airflow/contrib/operators/bigquery_operator.py", line 82, in execute
self.allow_large_results, self.udf_config, self.use_legacy_sql)
File "/usr/local/lib/python2.7/dist-packages/airflow/contrib/hooks/bigquery_hook.py", line 228, in run_query
default_project_id=self.project_id)
File "/usr/local/lib/python2.7/dist-packages/airflow/contrib/hooks/bigquery_hook.py", line 917, in _split_tablename
assert default_project_id is not None, "INTERNAL: No default project is specified"
AssertionError: INTERNAL: No default project is specified
Code:
sql_bigquery = BigQueryOperator(
    task_id='sql_bigquery',
    use_legacy_sql=False,
    write_disposition='WRITE_TRUNCATE',
    allow_large_results=True,
    bql='''
    #standardSQL
    SELECT ID, Name, Group, Mark, RATIO_TO_REPORT(Mark) OVER(PARTITION BY Group) AS percent FROM `tensile-site-168620.temp.marks`
    ''',
    destination_dataset_table='temp.percentage',
    dag=dag
)
EDIT: I finally fixed this problem by simply adding the bigquery_conn_id='bigquery' parameter to the BigQueryOperator task, after running the code below in a separate Python script.
Apparently you need to specify your project ID under Admin -> Connections in the Airflow UI. You must do this as a JSON object such as "project" : "".
Personally I can't get the webserver working on GCP, so this is not feasible for me. There is a programmatic solution here:
from airflow.models import Connection
from airflow.settings import Session

session = Session()
gcp_conn = Connection(
    conn_id='bigquery',
    conn_type='google_cloud_platform',
    extra='{"extra__google_cloud_platform__project":"<YOUR PROJECT HERE>"}')

if not session.query(Connection).filter(
        Connection.conn_id == gcp_conn.conn_id).first():
    session.add(gcp_conn)
    session.commit()
These suggestions are from a similar question here.
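For completeness, the change mentioned in the EDIT above is then just passing that connection id to the operator (a sketch reusing the task from the question; in the Airflow 1.x contrib operator the parameter is named bigquery_conn_id):

from airflow.contrib.operators.bigquery_operator import BigQueryOperator

sql_bigquery = BigQueryOperator(
    task_id='sql_bigquery',
    bigquery_conn_id='bigquery',   # matches the conn_id registered by the snippet above
    use_legacy_sql=False,
    write_disposition='WRITE_TRUNCATE',
    allow_large_results=True,
    bql='SELECT ...',              # same query as in the question
    destination_dataset_table='temp.percentage',
    dag=dag
)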
I get the same error when running Airflow locally. My solution is to add the following connection string as an environment variable:
AIRFLOW_CONN_BIGQUERY_DEFAULT="google-cloud-platform://?extra__google_cloud_platform__project=<YOUR PROJECT HERE>"
BigQueryOperator uses the "bigquery_default" connection. When it is not specified, local Airflow uses an internal version of the connection, which is missing the project_id property. As you can see, the connection string above provides the project_id property.
On startup, Airflow loads environment variables that start with "AIRFLOW_" into memory. This mechanism can be used to override Airflow properties and to provide connections when running locally, as explained in the Airflow documentation here. Note that this also works when running Airflow directly without starting the webserver.
So I have set up environment variables for all my connections, for example AIRFLOW_CONN_MYSQL_DEFAULT. I have put them into a .env file that gets sourced by my IDE, but putting them into your .bash_profile would work fine too.
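For illustration, such a .env or .bash_profile file might look like the lines below (the MySQL URI is only a made-up example of the AIRFLOW_CONN_<CONN_ID> naming scheme):

export AIRFLOW_CONN_BIGQUERY_DEFAULT="google-cloud-platform://?extra__google_cloud_platform__project=<YOUR PROJECT HERE>"
export AIRFLOW_CONN_MYSQL_DEFAULT="mysql://user:password@db-host:3306/mydb"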
When you look inside your Airflow instance on Cloud Composer, you see that the "bigquery_default" connection there has the project_id property set. That's why BigQueryOperator works when running through Cloud Composer.
(I am on airflow 1.10.2 and BigQuery 1.10.2)

404 Page not found while deploying MicroStrategy Web sample

I am a newbie to MicroStrategy. I have managed to install MicroStrategy and install the sample (MSTR-SDK/samples/javaExternalSecuritySample) into the MicroStrategy root directory, under the /web aspx/plugin directory.
I am still getting the error below when I launch the MicroStrategy web tool.
HTTP Error 404.0 - Not Found
The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
How do I fix this?
Can you browse to the file on your webserver at the location indicated (inetpub\wwwroot\plugins\ESM\JSP\mstrWeb.jsp)?

Boto.conf not found

I am running a Flask app on an AWS EC2 server and have been using boto to access data stored in DynamoDB. After accidentally adding boto.conf to a git commit (and pushing and pulling on the server), I have found that my Python code can no longer locate the boto.conf file. I rolled back the changes with git, but the problem remains.
The Python module and the boto.conf file exist in the same directory, but when the module calls
boto.config.load_credential_file('boto.conf')
I get the Flask error IOError: [Errno 2] No such file or directory: 'boto.conf'.
As per the documentation:
I'm not really sure why you are using boto.config.load_credential_file. In general, boto picks up its configuration from a file called either ~/.boto or /etc/boto.cfg.
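For illustration, a minimal ~/.boto (or /etc/boto.cfg) looks like the snippet below; boto reads it automatically, so no load_credential_file() call is needed (the values are placeholders):

[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY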
You can also look at this question from SO, which also covers how to set up the configuration for boto: Getting Credentials File in the boto.cfg for Python