I followed this tutorial (https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-python-postgresql-app?tabs=bash#clone-the-sample-app), as well as two others, trying to host my project on Azure. I managed to host the sample web app used in the tutorial, but I could not host my own project.
**I keep getting "Server Error 500". I've spent around 36 hours trying to fix the problem.**
I checked the application logs - nothing
I checked the kudu/scm logs - nothing
I looked under "App Service logs" and checked the ftp logs - nothing
I checked whether all the files had been uploaded at "<>.scm.azurewebsites.net/wwwroot/"; the static files had uploaded successfully.
I went to "Web SSH" and installed all the dependencies** "pip install -r requirements.txt"
then did "python manage.py runserver" AND NO ERRORS, but it did not want to connect to "127.0.0.1:8000" or "localhost:8000" ???
I spent around 6 hours searching for answers and tried everything I found; nothing worked.
I set WEBSITES_PORT to 8000 (tried different ports and removed the setting after having no luck).
I changed DEBUG to False and to True; neither worked.
I set all the necessary environment variables (e.g. DB_HOST, DB_PASSWORD, ...).
The App Service plan is F1 (free)
I went to every page on my web app and got Server Error 500 on all of them, except the admin login page; after logging into admin, I got the error again.
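(A note on the runserver attempt above: as far as I understand, a dev server started from Web SSH listens only inside the container, so a browser on my own machine would never reach it; the only meaningful check I can think of, assuming it is bound to port 8000, is to curl it from within the same SSH session:)
curl -i http://127.0.0.1:8000/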
Possible solutions I thought might work:
I might be missing an important "Application setting" (see the command sketch after this list).
One of the dependencies might be causing the problem, but I highly doubt it.
I don't know at this point; please help.
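For reference, this is roughly how I have been setting and checking the application settings, in case I am doing something wrong there; this is just a sketch with placeholder resource group, app name, and values:
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings DB_HOST=<host> DB_PASSWORD=<password>
az webapp config appsettings list --resource-group <resource-group> --name <app-name>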
This is roughly what the logs kept saying:
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
2020-06-24T08:28:13.331Z INFO - Starting container for site
2020-06-24T08:28:13.331Z INFO - docker run -d -p 5480:8000 --name forexflowcom_0_136ed024 -e WEBSITE_SITE_NAME=forexflowcom -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=forexflowcom.azurewebsites.net -e WEBSITE_INSTANCE_ID=9072c805cf2bc663ced034398777a5d5f6115a51e64a73b6fc69b73f64c8660e -e HTTP_LOGGING_ENABLED=1 appsvc/python:3.7_20200101.1
2020-06-24T08:28:16.751Z INFO - Initiating warmup request to container forexflowcom_0_136ed024 for site forexflowcom
2020-06-24T08:28:28.970Z INFO - Container forexflowcom_0_136ed024 for site forexflowcom initialized successfully and is ready to serve requests.
2020-06-24T09:34:28.003Z INFO - Starting container for site
2020-06-24T09:34:28.010Z INFO - docker run -d -p 5757:8000 --name forexflowcom_1_86357e3d -e WEBSITE_SITE_NAME=forexflowcom -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=forexflowcom.azurewebsites.net -e WEBSITE_INSTANCE_ID=9072c805cf2bc663ced034398777a5d5f6115a51e64a73b6fc69b73f64c8660e -e HTTP_LOGGING_ENABLED=1 appsvc/python:3.7_20200101.1
2020-06-24T09:34:31.507Z INFO - Initiating warmup request to container forexflowcom_1_86357e3d for site forexflowcom
2020-06-24T09:34:49.002Z INFO - Container forexflowcom_1_86357e3d for site forexflowcom initialized successfully and is ready to serve requests.
2020-06-24T09:38:04.238Z INFO - Starting container for site
2020-06-24T09:38:04.240Z INFO - docker run -d -p 7958:8000 --name forexflowcom_2_79f5bea0 -e WEBSITE_SITE_NAME=forexflowcom -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=forexflowcom.azurewebsites.net -e WEBSITE_INSTANCE_ID=9072c805cf2bc663ced034398777a5d5f6115a51e64a73b6fc69b73f64c8660e -e HTTP_LOGGING_ENABLED=1 appsvc/python:3.7_20200101.1
2020-06-24T09:38:08.317Z INFO - Initiating warmup request to container forexflowcom_2_79f5bea0 for site forexflowcom
2020-06-24T09:38:23.838Z INFO - Waiting for response to warmup request for container forexflowcom_2_79f5bea0. Elapsed time = 15.5210597 sec
2020-06-24T09:38:41.054Z INFO - Container forexflowcom_2_79f5bea0 for site forexflowcom initialized successfully and is ready to serve requests.
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
EDIT:
I found the solution. In settings.py I had:
try:
    from .local_settings import *
except ImportError:
    print("No local file, your in production")
After removing this block, it worked.
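(For anyone who hits the same thing: a quick way to check whether a stray local_settings.py was deployed is to search for it from the Web SSH or Kudu console. This is just a sketch, assuming the code lives under the default /home/site/wwwroot path:)
find /home/site/wwwroot -maxdepth 2 -name local_settings.py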
Related
I currently have an AWS server set up with Docker to run the Keycloak container. For SSL/TLS, an AWS load balancer is configured to forward HTTPS/443 traffic to the container, which receives it on port 8080, with the TLS connection terminated at the load balancer.
When I create the container with the following command, I am able to browse to and log into the Keycloak service via the server's IP address:
docker run --name keycloak -v keybase-storage -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=TempAdminPassword jboss/keycloak
However, if I try to log into the server by browsing to the URL, I am redirected to http://default-host:8080/auth/admin/ and the browser shows a connection error page.
While trying to find a solution, I found out how to pass Java options to the container when it is first run, and using the resources from this page I used the following command to start the container (URL replaced for privacy concerns):
docker run --name keycloak -v keybase-storage -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=TempAdminPassword -e JAVA_OPTS_APPEND="-Dkeycloak.frontendUrl=https://sso.IntendedURL.com" jboss/keycloak
However, this yields the same result when trying to browse to the page.
The main clue I have to go on right now is this line near the end of the log output from the previously shown docker run command, which reads as follows:
19:23:00,039 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 67) WFLYUT0021: Registered web context: '/auth' for server 'default-server'
What I believe I need to do now is either change the container's configuration after it has been created (I have been unable to edit files using docker exec, so this is less likely) or pass a Java option into the run command when the container is first started.
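For reference, this is the kind of run command I have in mind; I have not verified it yet, but if I understand the jboss/keycloak image documentation correctly it supports a KEYCLOAK_FRONTEND_URL variable and a PROXY_ADDRESS_FORWARDING flag for running behind a TLS-terminating load balancer (the URL is a placeholder):
docker run --name keycloak -v keybase-storage -p 8080:8080 \
 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=TempAdminPassword \
 -e PROXY_ADDRESS_FORWARDING=true \
 -e KEYCLOAK_FRONTEND_URL=https://sso.IntendedURL.com/auth \
 jboss/keycloak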
Please let me know if you have any questions or if I can provide any other information.
Thank you.
Environment information:
Operating system: Amazon Linux 2
Docker version: 19.03.13-ce, build 4484c46
Keycloak version: 12.0.1 (WildFly Core 13.0.3.Final)
When I try to run a local ESP (Extensible Service Proxy), I get this error:
ERROR:Fetching service config failed(status code 403, reason Forbidden, url ***)
I have a newly created service account, and it works fine with the gcloud CLI.
System: macOS Sierra with Docker for Mac.
This is the command I use to start the container:
docker run -d --name="esp" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s 2017-02-07r5 -v echo.endpoints.****.cloud.goog -p 8082 -a localhost:9000 -k /esp/serviceaccount.json
UPDATE:
I found the error: I had passed the version as the service name and the service name as the version.
Now I get no error, but it still does not work. This is the console output from the container; as far as I can tell everything looks fine, but I can't call the proxy at localhost:8082/***:
INFO:Constructing an access token with scope https://www.googleapis.com/auth/service.management.readonly
INFO:Service account email: aplha-api@****.iam.gserviceaccount.com
INFO:Refreshing access_token
INFO:Fetching the service configuration from the service management service
nginx: [warn] Using trusted CA certificates file: /etc/nginx/trusted-ca-certificates.crt
This is the corrected command:
docker run -d --name="esp-user-api" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s echo.endpoints.***.cloud.goog -v 2017-02-07r5 -p 8082 -a localhost:9000 -k /esp/serviceaccount.json
Aron, I assume:
(1) you are following this user guide: https://cloud.google.com/endpoints/docs/running-esp-localdev
(2) And you do have a backend running on localhost:9000
Have you issued a curl request to localhost:8082/*** as suggested in that user guide? Does the curl command get stuck, or does it return any error message?
If you don't have a local backend running yet, I would recommend following the user guide above to run one. Note that the guide will instruct you to run it on port 8080, so you'll need to change your docker run command from "-a localhost:9000" to "-a localhost:8080" as well (see the adjusted command below).
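Concretely, with the sample backend from the guide listening on port 8080, the adjusted command would look something like this (same flags as yours, only the backend address changes):
docker run -d --name="esp" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s echo.endpoints.****.cloud.goog -v 2017-02-07r5 -p 8082 -a localhost:8080 -k /esp/serviceaccount.json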
Also, please note that this user guide is for a Linux environment; we haven't tried this setup on a Mac yet. We have seen a user get it working on Docker for Windows with extra work, by setting the backend to the IP of the Docker NIC. Note that "-a" is short for "--backend".
see https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/google-cloud-endpoints/4sRaSkigPiU/KY8g46NSBgAJ
My AWS server, on which I host my SVN repository, was recently rebooted. The SVN path was like so:
svn://ec2-54-xxx-xx-xxx.us-east-2.compute.amazonaws.com/myproject
Now when I try to visit it with TortoiseSVN I get:
Unable to connect to a repository at URL
'svn://ec2-54-xxx-xx-xxx.us-east-2.compute.amazonaws.com/myproject'
No repository found in
'svn://ec2-54-xxx-xx-xxx.us-east-2.compute.amazonaws.com/myproject'
When I get onto my server, I run the following:
cd /home/svn/myproject
sudo /usr/bin/svnserve -d
Sure enough I see it running:
[ec2-user@ip-172-xxx-xx-xxx svn]$ ps -ef | grep svn
root 29145 1 0 21:02 ? 00:00:00 /usr/bin/svnserve -d
ec2-user 29157 29108 0 21:02 pts/4 00:00:00 grep svn
But my attempts to hit it still fail. I had been using svn:// before; when I tried https:// it gave me "Error running context: No connection could be made because the target machine actively refused it", and http:// resulted in "Redirect cycle detected for URL".
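One thing I have not ruled out yet is whether port 3690 (which I assume is the default svnserve port) is still reachable after the reboot, both locally and through the EC2 security group; something like:
sudo ss -tlnp | grep 3690
nc -vz ec2-54-xxx-xx-xxx.us-east-2.compute.amazonaws.com 3690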
Any suggestions on what I'm missing? I'm almost certain it's something simple and dumb, but I've been working on it for over an hour now.
When running the following command:
cmd /c C:\sonar-runner-2.4\bin\sonar-runner.bat
(sonar runner is installed on the build machine)
I get the following errors:
ERROR: Sonar server 'http://localhost:9000' can not be reached
ERROR: Error during Sonar runner execution
ERROR: java.net.ConnectException: Connection refused: connect
ERROR: Caused by: Connection refused: connect
What can cause these errors?
Hi Dinesh,
This is my sonar-runner.properties file:
sonar.projectKey=NDM
sonar.projectName=NDM
sonar.projectVersion=1.0
sonar.visualstudio.solution=NDM.sln
#sonar.sourceEncoding=UTF-8
sonar.web.host:sonarqube
sonar.web.port=9000
# Enable the Visual Studio bootstrapper
sonar.visualstudio.enable=true
# Unit Test Results
sonar.cs.vstest.reportsPaths=TestResults/*.trx
# Required only when using SonarQube < 4.2
sonar.language=cs
sonar.sources=.
As you can see, I set sonar.web.host:sonarqube and sonar.web.port=9000, but when I run sonar-runner.bat I still get
ERROR: Sonar server 'http://localhost:9000' can not be reached
Why is it still looking for localhost:9000 and not sonarqube:9000 as I set?
I also saw the following line in the sonar-runner.bat log:
INFO: Work directory: D:\sTFS\26091\Sources\NDM\Source..sonar
while my solution is in D:\sTFS\26091\Sources\NDM\Source\. Could this be the problem?
Thanks,
Guy
If you use the SonarScanner CLI with Docker, you may get this error because the SonarScanner container cannot reach the SonarQube server container.
Note that you will have the same error with a simple curl from another container:
docker run --rm byrnedo/alpine-curl 127.0.0.1:9000
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 9000: Connection refused
The solution is to connect the SonarScanner container to the same Docker network as your SonarQube instance, for instance with --network=host:
docker run --network=host -e SONAR_HOST_URL='http://127.0.0.1:9000' --user="$(id -u):$(id -g)" -v "$PWD:/usr/src" sonarsource/sonar-scanner-cli
(The other parameters of this command come from the SonarScanner CLI documentation.)
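If you would rather not use the host network, an alternative is to put both containers on the same user-defined Docker network and address SonarQube by its container name; a sketch, assuming the server container is named sonarqube:
docker network create sonarnet
docker network connect sonarnet sonarqube
docker run --network=sonarnet -e SONAR_HOST_URL='http://sonarqube:9000' --user="$(id -u):$(id -g)" -v "$PWD:/usr/src" sonarsource/sonar-scanner-cli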
I had the same issue; I switched to the IP address and it worked.
Go to System Preferences --> Network --> Advanced --> open the TCP/IP tab --> copy the IPv4 address.
Use that IP instead of localhost.
Hope this helps.
You should configure the sonar-runner to use your existing SonarQube server. To do so, you need to update its conf/sonar-runner.properties file and specify the SonarQube server URL, username, password, and JDBC URL as well. See https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner for details.
If you don't yet have an up and running SonarQube server, then you can launch one locally (with the default configuration) - it will bind to http://localhost:9000 and work with the default sonar-runner configuration. See https://docs.sonarqube.org/latest/setup/get-started-2-minutes/ for details on how to get started with the SonarQube server.
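If you prefer not to edit the properties file, the server URL can also be passed on the command line as an analysis property; a sketch based on the command from the question (the hostname is a placeholder):
cmd /c C:\sonar-runner-2.4\bin\sonar-runner.bat -Dsonar.host.url=http://your-sonarqube-host:9000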
For others who run into this issue in a project that is not using a sonar-runner.properties file, you may find (as I did) that you need to tweak your pom.xml, adding a sonar.host.url property.
For example, I needed to add the following line under the 'properties' element:
<sonar.host.url>https://sonar.my-internal-company-domain.net</sonar.host.url>
where the URL points to our internal SonarQube deployment.
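With that property in place, the analysis runs as usual; for completeness, a typical invocation (assuming the sonar-maven-plugin is configured) looks like:
mvn clean verify sonar:sonar
The same property can also be passed ad hoc with -Dsonar.host.url=... on the mvn command line.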
For me, the issue was that the Maven Sonar plugin was using the proxy servers defined in Maven's settings.xml. I was trying to reach SonarQube on another host (not a localhost alias), so the plugin was trying to go through the proxy server to get to it. I just added my alias to nonProxyHosts in settings.xml and it is working now. I did not face this issue with maven sonar plugin 3.2, only after I upgraded it.
<proxy>
  <id>proxy_id</id>
  <active>true</active>
  <protocol>http</protocol>
  <host>your-proxy-host</host>
  <port>your-proxy-port</port>
  <nonProxyHosts>localhost|127.0.*|other-non-proxy-hosts</nonProxyHosts>
</proxy>
The issue occurred for me in a slightly different way a while ago.
I had a SonarQube container running normally on my host machine's main network, accessible from the browser at the usual localhost:9000. But whenever the scanner tried to connect to the server, it couldn't, despite being on the same network as the host.
I made sure of that, because in the docker run command I specified --network=bridge.
The trick was to point to my machine's actual local IP instead of just writing localhost.
You can find your machine's IP by running ipconfig on Windows or ifconfig on Linux.
So in the scanner's docker run command I pointed to the server with -Dsonar.host.url=http://192.168.1.2:9000, where 192.168.1.2 is my local host address.
This was my final docker command to run the server:
docker run -d --name sonarqube \
--network=bridge \
-p 9000:9000 \
-e SONAR_JDBC_USERNAME=<db username> \
-e SONAR_JDBC_PASSWORD=<db password> \
-v sonarqube_data:/opt/sonarqube/data \
-v sonarqube_extensions:/opt/sonarqube/extensions \
-v sonarqube_logs:/opt/sonarqube/logs \
sonarqube:community
And this is for the scanner:
docker run \
--network=bridge \
-v "<local path of the project to scan>:/usr/src" sonarsource/sonar-scanner-cli \
-Dsonar.projectKey=<project key> \
-Dsonar.sources=. \
-Dsonar.host.url=http://<local-ip>:9000 \
-Dsonar.login=<token>
In the config file there is a colon instead of an equals sign after sonar.web.host.
It is:
sonar.web.host:sonarqube
It should be:
sonar.web.host=sonarqube
In the sonar.properties file in the conf folder, I had hardcoded the IP of the machine where SonarQube was installed in the property sonar.web.host=10.9.235.22. I commented this out and it started working for me.
Please check whether Postgres (or whichever database service you use) is running properly.
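For example, on a typical Linux host you could verify the database is up with something like the following (assuming PostgreSQL on the default port; adjust for your setup):
sudo systemctl status postgresql
pg_isready -h localhost -p 5432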
Allowing port 9000 through the firewall on your operating system can also clear the error "ERROR: Sonar server 'http://localhost:9000' can not be reached". On Ubuntu, this is as simple as typing "sudo ufw allow 9000/tcp" in a terminal; after that, the error was gone when I clicked Build Now in Jenkins.
I have been trying to change the domain and have been running into issues. I hope someone can help me out here; I have documented the steps I went through below.
I requested the DNS resource from our DNS admin team:
.mike-cf.company.com canonical name = mike-cf.company.com.
Name: mike-cf.company.com
Address: 10.52.88.123
I then installed with the -D switch:
bash < <(curl -s -k -B http://raw.github.com/cloudfoundry/vcap/master/dev_setup/bin/vcap_dev_setup -D mike-cf.company.com)
I noticed that not all the config files in ~/cloudfoundry/.deployments/devbox/config had changed, so I changed them using sed:
$ cd ~/cloudfoundry/.deployments/devbox/config
$ sed -i 's/.vcap.me/.newdomain.com/g' *.yml
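A quick check like the following confirms that nothing still references the old domain (any file it prints still needs fixing):
grep -l 'vcap.me' *.yml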
I restarted and things were looking good; I was able to run vmc target with no problem, but I was not able to register a user.
$ vmc target http://api.mike-cf.company.com/
Successfully targeted to [http://api.mike-cf.company.com]
$ vmc register --email mike@company.com --passwd password
Creating New User: Error 100: Bad request
There were no entries in uaa.log, only this in cloud_controller.log:
[2012-09-25 09:06:46.712110] cc - pid=20400 tid=8ee9 fid=4757 DEBUG -- ---> async\nrequest: post http://uaa.mike-cf.company.com/oauth/token\nheaders: {"content-type"=>"application/x-www-form-urlencoded", "accept"=>"application/json", "authorization"=>"Basic Y2xvdWRfY29udHJvbGxlcjpjbG91ZGNvbnRyb2xsZXJzZWNyZXQ="}\nbody: grant_type=client_credentials
[2012-09-25 09:06:46.718338] cc - pid=20400 tid=8ee9 fid=4757 DEBUG -- <---\nresponse: 404\nheaders: {"SERVER"=>"nginx", "DATE"=>"Tue, 25 Sep 2012 16:06:46 GMT", "CONTENT_TYPE"=>"text/html", "CONTENT_LENGTH"=>"162", "CONNECTION"=>"close"}\nbody: \r\n404 Not Foundhttp://uaa.mike-cf.company.com: 404 trace ["/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/bundler/gems/uaa-dad29c9030f4/gem/lib/uaa/http.rb:56:in json_parse_reply'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/bundler/gems/uaa-dad29c9030f4/gem/lib/uaa/token_issuer.rb:157:inrequest_token'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/bundler/gems/uaa-dad29c9030f4/gem/lib/uaa/token_issuer.rb:128:in client_credentials_grant'", "/home/mike/cloudfoundry/cloud_controller/cloud_controller/app/models/uaa_token.rb:80:inaccess_token'", "/home/mike/cloudfoundry/cloud_controller/cloud_controller/app/models/uaa_token.rb:96:in user_account_instance'", "/home/mike/cloudfoundry/cloud_controller/cloud_controller/app/controllers/users_controller.rb:13:increate'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/actionpack-3.0.14/lib/action_controller/metal/implicit_render.rb:4:in send_action'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/actionpack-3.0.14/lib/abstract_controller/base.rb:150:inprocess_action'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/actionpack-3.0.14/lib/action_controller/metal/rendering.rb:11:in process_action'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/actionpack-3.0.14/lib/abstract_controller/callbacks.rb:18:inblock in process_action'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/activesupport-3.0.14/lib/active_support/callbacks.rb:446:in `run_3844132275556875466__process_action_2824786929479189233_callbacks'"]
[2012-09-25 09:06:46.896386] cc_events - pid=20400 tid=8ee9 fid=4757 INFO -- [2012-09-25 09:06:46 -0700, :USER, "N/A", "POST:/users", "mike#company.com", :FAILED, "Bad request"]
I have found the problem: there seemed to be an issue with the vmc version I was using. Once I downgraded vmc, I was able to connect.
gem uninstall vmc
gem install --version '= 0.3.18' vmc
Here is the thread that led me to the answer:
https://groups.google.com/a/cloudfoundry.org/forum/?fromgroups=#!topic/vcap-dev/enY2qKnSJWI
Is it possible to see the content of the UAA config file? Make sure it has the correct IP address specified for the NATS message bus; the line should look something like this:
mbus: nats://nats:nats#192.168.1.10:4222/
If that IP address is incorrect, it needs to be changed. I take it the server it is installed on has a static IP address? Was it assigned before you installed VCAP?
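For reference, something like this should print the relevant line, assuming the UAA config sits in the same config directory you edited earlier (the exact file name may differ in your deployment):
grep -n 'mbus' ~/cloudfoundry/.deployments/devbox/config/uaa.yml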