Change Domain Problems - cloud-foundry

I have been trying to change the domain and have been running into issues. I hope someone can help me out; I have documented the steps I went through below.
I requested the DNS resource from our DNS admin team:
.mike-cf.company.com canonical name = mike-cf.company.com.
Name: mike-cf.company.com
Address: 10.52.88.123
I then installed with the -D switch:
bash < <(curl -s -k -B http://raw.github.com/cloudfoundry/vcap/master/dev_setup/bin/vcap_dev_setup -D mike-cf.company.com)
I noticed that not all the config files in ~/cloudfoundry/.deployments/devbox/config had changed, so I updated the rest with sed:
$ cd ~/cloudfoundry/.deployments/devbox/config
$ sed -i 's/.vcap.me/.newdomain.com/g' *.yml
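A slightly safer variant escapes the dots in the pattern (unescaped, `.` matches any character) and verifies nothing still references the old domain afterwards. This is a sketch using a throwaway directory in place of the real ~/cloudfoundry/.deployments/devbox/config:

```shell
# Illustrative stand-in for the real deployment config directory.
mkdir -p /tmp/devbox-config
printf 'external_uri: api.vcap.me\n' > /tmp/devbox-config/cloud_controller.yml

# Escape the dots so the pattern matches only the literal old domain.
sed -i 's/\.vcap\.me/.mike-cf.company.com/g' /tmp/devbox-config/*.yml

# Any file still mentioning the old domain shows up here (no output = clean).
grep -l 'vcap\.me' /tmp/devbox-config/*.yml || true
```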
I restarted and things were looking good: vmc target worked with no problem, but I was not able to register a user.
$ vmc target http://api.mike-cf.company.com/
Successfully targeted to [http://api.mike-cf.company.com]
$ vmc register --email mike@company.com --passwd password
Creating New User: Error 100: Bad request
There were no entries in uaa.log, only this in cloud_controller.log:
[2012-09-25 09:06:46.712110] cc - pid=20400 tid=8ee9 fid=4757 DEBUG -- ---> async\nrequest: post http://uaa.mike-cf.company.com/oauth/token\nheaders: {"content-type"=>"application/x-www-form-urlencoded", "accept"=>"application/json", "authorization"=>"Basic Y2xvdWRfY29udHJvbGxlcjpjbG91ZGNvbnRyb2xsZXJzZWNyZXQ="}\nbody: grant_type=client_credentials
[2012-09-25 09:06:46.718338] cc - pid=20400 tid=8ee9 fid=4757 DEBUG -- <---\nresponse: 404\nheaders: {"SERVER"=>"nginx", "DATE"=>"Tue, 25 Sep 2012 16:06:46 GMT", "CONTENT_TYPE"=>"text/html", "CONTENT_LENGTH"=>"162", "CONNECTION"=>"close"}\nbody: \r\n404 Not Foundhttp://uaa.mike-cf.company.com: 404 trace ["/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/bundler/gems/uaa-dad29c9030f4/gem/lib/uaa/http.rb:56:in json_parse_reply'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/bundler/gems/uaa-dad29c9030f4/gem/lib/uaa/token_issuer.rb:157:inrequest_token'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/bundler/gems/uaa-dad29c9030f4/gem/lib/uaa/token_issuer.rb:128:in client_credentials_grant'", "/home/mike/cloudfoundry/cloud_controller/cloud_controller/app/models/uaa_token.rb:80:inaccess_token'", "/home/mike/cloudfoundry/cloud_controller/cloud_controller/app/models/uaa_token.rb:96:in user_account_instance'", "/home/mike/cloudfoundry/cloud_controller/cloud_controller/app/controllers/users_controller.rb:13:increate'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/actionpack-3.0.14/lib/action_controller/metal/implicit_render.rb:4:in send_action'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/actionpack-3.0.14/lib/abstract_controller/base.rb:150:inprocess_action'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/actionpack-3.0.14/lib/action_controller/metal/rendering.rb:11:in process_action'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/actionpack-3.0.14/lib/abstract_controller/callbacks.rb:18:inblock in process_action'", 
"/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/activesupport-3.0.14/lib/active_support/callbacks.rb:446:in `run_3844132275556875466__process_action_2824786929479189233_callbacks'"]
[2012-09-25 09:06:46.896386] cc_events - pid=20400 tid=8ee9 fid=4757 INFO -- [2012-09-25 09:06:46 -0700, :USER, "N/A", "POST:/users", "mike@company.com", :FAILED, "Bad request"]

I have found the problem: there was an issue with the vmc version I was using. Once I downgraded vmc, I was able to connect:
gem uninstall vmc
gem install --version '= 0.3.18' vmc
Here is the thread that led me to the answer:
https://groups.google.com/a/cloudfoundry.org/forum/?fromgroups=#!topic/vcap-dev/enY2qKnSJWI

Is it possible to see the content of the uaa config file? Make sure it has the correct IP address specified for the NATS message bus; the line should look something like this:
mbus: nats://nats:nats@192.168.1.10:4222/
If that IP address is incorrect, it needs to be changed. I take it the server it is installed on has a static IP address? Was it assigned before you installed VCAP?
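A quick way to answer that without pasting whole config files is to extract the mbus line from every component config and confirm they agree. A sketch, using a throwaway directory in place of the real config directory:

```shell
mkdir -p /tmp/cf-config
# Stand-ins for uaa.yml, cloud_controller.yml, etc.
printf 'mbus: nats://nats:nats@192.168.1.10:4222/\n' > /tmp/cf-config/uaa.yml
printf 'mbus: nats://nats:nats@192.168.1.10:4222/\n' > /tmp/cf-config/cloud_controller.yml

# A single unique line means every component points at the same NATS address.
grep -h '^mbus:' /tmp/cf-config/*.yml | sort -u
```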

Running code server on gcp cloud shell gives error when previewing

I'm trying to run code-server on gcp cloud shell. I downloaded the following version
https://github.com/cdr/code-server/releases/download/v3.9.2/code-server-3.9.2-linux-amd64.tar.gz, which I think is the correct one, extracted the contents and ran
code-server --auth none
This gave the following output
[2021-04-06T00:53:21.728Z] info code-server 3.9.2 109d2ce3247869eaeab67aa7e5423503ec9eb859
[2021-04-06T00:53:21.730Z] info Using user-data-dir ~/.local/share/code-server
[2021-04-06T00:53:21.751Z] info Using config file ~/.config/code-server/config.yaml
[2021-04-06T00:53:21.751Z] info HTTP server listening on http://127.0.0.1:8080
[2021-04-06T00:53:21.751Z] info - Authentication is disabled
[2021-04-06T00:53:21.751Z] info - Not serving HTTPS
Now when I try Web Preview -> Preview on port 8080, nothing happens: I just get a blank screen, and in the code-server console I see the following error:
[2021-04-06T00:50:04.470Z] error vscode Handshake timed out {"token":"e9b80ff7-10f9-4089-8497-b98688129452"}
I'm not sure what I need to do here?
In the Cloud Shell editor, create a file with a .sh extension, and install code-server with these steps (the \K pattern now strips the leading "v" so the tag can be reused in the download URL):
export VERSION=`curl -s https://api.github.com/repos/cdr/code-server/releases/latest | grep -oP '"tag_name": "v\K(.*)(?=")'`
wget https://github.com/cdr/code-server/releases/download/v${VERSION}/code-server-${VERSION}-linux-amd64.tar.gz
tar -xvzf code-server-${VERSION}-linux-amd64.tar.gz
cd code-server-${VERSION}-linux-amd64
Run the vscode.sh file from the terminal:
./vscode.sh
If a "permission denied" warning appears, run chmod +x vscode.sh and then run the file again.
To navigate to the folder:
cd code-server-3.10.2-linux-amd64/
To navigate to the bin:
cd bin/
To start the server:
./code-server --auth none --port 8080
Now you can see the VS Code IDE in your browser, either via the Web Preview -> Preview on port 8080 option or via the HTTP server link in your terminal.
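The steps above can be collected into a single vscode.sh. This is a sketch (version number and paths as in the answer); it is written to /tmp here rather than executed, since the download and the server start need a live Cloud Shell session:

```shell
cat > /tmp/vscode.sh <<'EOF'
#!/bin/bash
set -e
VERSION=3.10.2
wget -q "https://github.com/cdr/code-server/releases/download/v${VERSION}/code-server-${VERSION}-linux-amd64.tar.gz"
tar -xzf "code-server-${VERSION}-linux-amd64.tar.gz"
cd "code-server-${VERSION}-linux-amd64/bin"
./code-server --auth none --port 8080
EOF
chmod +x /tmp/vscode.sh
```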
My gut is saying that one must study this article (Expose code-server) in great detail. I think you will find that code-server is listening on IP address 127.0.0.1 at port 8080. Your thinking then is to access this server using Web Preview on port 8080; however, pay attention to the IP addresses of your virtual machine. The IP address 127.0.0.1 is known as the loopback address. It is ONLY accessible to applications running on the SAME machine. My belief is that when you run Web Preview, you are trying to access the IP address of your Cloud Shell machine, which is NOT 127.0.0.1.
If you read the above article, the story goes on to show how to use SSH forwarding to provide a front-end to whatever this application may be.
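If the preview really is hitting a different interface, one low-effort experiment (an assumption based on the answer above, not a confirmed fix) is to make code-server listen on all interfaces via its config file:

```yaml
# ~/.config/code-server/config.yaml (sketch)
bind-addr: 0.0.0.0:8080
auth: none
cert: false
```

Binding to 0.0.0.0 with auth disabled exposes the IDE to anything that can reach the VM, so treat this as a diagnostic step only.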

Travis CI Deploy by SSH Script/Hosts issue

I have a Django site which I would like to deploy to a DigitalOcean server every time a branch is merged to master. I have it mostly working and have followed this tutorial.
.travis.yml
language: python
python:
  - '2.7'
env:
  - DJANGO_VERSION=1.10.3
addons:
  ssh_known_hosts: mywebsite.com
git:
  depth: false
before_install:
  - openssl aes-256-cbc -K *removed encryption details* -in travis_rsa.enc -out travis_rsa -d
  - chmod 600 travis_rsa
install:
  - pip install -r backend/requirements.txt
  - pip install -q Django==$DJANGO_VERSION
before_script:
  - cp backend/local.env backend/.env
script: python manage.py test
deploy:
  skip_cleanup: true
  provider: script
  script: "./travis-deploy.sh"
  on:
    all_branches: true
travis-deploy.sh - runs when the travis 'deploy' task calls it
#!/bin/bash
# print outputs and exit on first failure
set -xe
if [ "$TRAVIS_BRANCH" == "master" ] ; then
  # setup ssh agent, git config and remote
  echo -e "Host mywebsite.com\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
  eval "$(ssh-agent -s)"
  ssh-add travis_rsa
  git remote add deploy "travis@mywebsite.com:/home/dean/se_dockets"
  git config user.name "Travis CI"
  git config user.email "travis@mywebsite.com"
  git add .
  git status # debug
  git commit -m "Deploy compressed files"
  git push -f deploy HEAD:master
  echo "Git Push Done"
  ssh -i travis_rsa -o UserKnownHostsFile=/dev/null travis@mywebsite.com 'cd /home/dean/se_dockets/backend; echo hello; ./on_update.sh'
else
  echo "No deploy script for branch '$TRAVIS_BRANCH'"
fi
Everything works fine until things get to the 'deploy' stage. I keep getting error messages like:
###########################################################
# WARNING: POSSIBLE DNS SPOOFING DETECTED! #
###########################################################
The ECDSA host key for mywebsite.com has changed,
and the key for the corresponding IP address *REDACTED FOR STACK OVERFLOW*
is unknown. This could either mean that
DNS SPOOFING is happening or the IP address for the host
and its host key have changed at the same time.
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
* REDACTED FOR STACK OVERFLOW *
Please contact your system administrator.
Add correct host key in /home/travis/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/travis/.ssh/known_hosts:11
remove with: ssh-keygen -f "/home/travis/.ssh/known_hosts" -R mywebsite.com
Password authentication is disabled to avoid man-in-the-middle attacks.
Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks.
Permission denied (publickey,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Script failed with status 128
INTERESTINGLY - if I re-run this job, the 'git push' command will succeed at pushing to the deploy remote (my server). However, the next step in the deploy script, which is to SSH into the server and run some post-update commands, will fail for the same reason (host fingerprint change or something). Or it will ask for travis@mywebsite.com's password (it has none) and hang on the input prompt.
Additionally, when I debug the Travis CI build and use the SSH URL you're given to SSH into the machine Travis CI runs on, I can SSH into my own server from it. However, it takes multiple tries to get around the errors.
So this seems to be a fluid problem, with state persisting from one build into the next on retries and causing different errors each time.
As you can see in my .yml file and the deploy script I have attempted to disable various host name checks and added the domain to known hosts etc... all to no avail.
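For reference, the two suppression mechanisms used here (the StrictHostKeyChecking block written to ~/.ssh/config and the -o UserKnownHostsFile=/dev/null flag on the ssh call) can be combined into one host block, so the git push and the ssh step behave the same way. A sketch against a scratch file standing in for ~/.ssh/config:

```shell
# Writing to /tmp here; in the deploy script this would be ~/.ssh/config.
printf 'Host mywebsite.com\n  StrictHostKeyChecking no\n  UserKnownHostsFile /dev/null\n' > /tmp/ssh_config_demo
cat /tmp/ssh_config_demo
```

With UserKnownHostsFile pointed at /dev/null in the config, a stale entry that the ssh_known_hosts addon wrote to ~/.ssh/known_hosts would be ignored by both commands.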
I know I have things 99% set up correctly as things do mostly succeed when I retry the job a few times.
Anyone seen this before?
Cheers,
Dean

Run an ESP locally for development

When I try to run a local ESP I get this error:
ERROR:Fetching service config failed(status code 403, reason Forbidden, url ***)
I have a newly created service account; this account works fine with the gcloud CLI.
System: OS X Sierra with Docker for Mac.
This is the command that I use to start the container:
docker run -d --name="esp" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s 2017-02-07r5 -v echo.endpoints.****.cloud.goog -p 8082 -a localhost:9000 -k /esp/serviceaccount.json
UPDATE:
I have found the error: I had passed the version as the service name and the service name as the version.
Now I get no error, but it still does not work. This is the console output from the container. From my view all is fine, but I can't call the proxy at localhost:8082/***
INFO:Constructing an access token with scope https://www.googleapis.com/auth/service.management.readonly
INFO:Service account email: aplha-api@****.iam.gserviceaccount.com
INFO:Refreshing access_token
INFO:Fetching the service configuration from the service management service
nginx: [warn] Using trusted CA certificates file: /etc/nginx/trusted-ca-certificates.crt
This is the corrected command:
docker run -d --name="esp-user-api" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s echo.endpoints.***.cloud.goog -v 2017-02-07r5 -p 8082 -a localhost:9000 -k /esp/serviceaccount.json
Aron, I assume:
(1) you are following this user guide: https://cloud.google.com/endpoints/docs/running-esp-localdev
(2) And you do have a backend running on localhost:9000
Have you issued a curl request to localhost:8082/*** as suggested in that user guide? Does the curl command get stuck or return an error message?
If you don't have a local backend running yet, I would recommend following the user guide above to run one. Note this guide will instruct you to run it at port 8080, so you'll need to change your docker run command from "-a localhost:9000" to "-a localhost:8080" as well.
Also, please note this user guide is for a Linux environment. We haven't tried this setup on a Mac yet. We do notice some users get this working on Windows Docker with extra work, setting the backend to the IP of the Docker NIC. Note "-a" is short for "--backend".
see https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/google-cloud-endpoints/4sRaSkigPiU/KY8g46NSBgAJ

AWS: Restarting SVN server after reboot - currently "unable to connect to a repository at URL" and "no repository found"

My AWS server was recently rebooted, and on it I had my SVN repository hosted. The SVN path was like so:
svn://ec2-54-xxx-xx-xxx.us-east-2.compute.amazonaws.com/myproject
Now when I try to visit it with TortoiseSVN I get:
Unable to connect to a repository at URL
'svn://ec2-54-xxx-xx-xxx.us-east-2.compute.amazonaws.com/myproject'
No repository found in
'svn://ec2-54-xxx-xx-xxx.us-east-2.compute.amazonaws.com/myproject'
When I get into my server I run the following:
cd /home/svn/myproject
sudo /usr/bin/svnserve -d
Sure enough I see it running:
[ec2-user@ip-172-xxx-xx-xxx svn]$ ps -ef | grep svn
root 29145 1 0 21:02 ? 00:00:00 /usr/bin/svnserve -d
ec2-user 29157 29108 0 21:02 pts/4 00:00:00 grep svn
But my attempts to hit it fail regardless. I had been using svn:// before, but when I tried https:// it gave me "Error running context: No connection could be made because the target machine actively refused it", and http:// resulted in "Redirect cycle detected for URL".
Any suggestions on what I'm missing? I'm almost certain it's something simple and dumb, but I've been working on it for over 60 minutes now.
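One thing worth checking (an assumption based on the symptoms, not something confirmed in the question): svnserve -d without a root option serves paths relative to the filesystem root, so svn://host/myproject would look for a repository at /myproject rather than /home/svn/myproject. Restarting the daemon with -r makes URL paths relative to the repositories' parent directory. Sketched here by writing a restart script rather than touching a live server:

```shell
cat > /tmp/restart-svnserve.sh <<'EOF'
#!/bin/sh
# -r sets the served root: svn://host/myproject now maps to /home/svn/myproject.
sudo /usr/bin/svnserve -d -r /home/svn
EOF
chmod +x /tmp/restart-svnserve.sh
```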

ERROR: Sonar server 'http://localhost:9000' can not be reached

When running the following command:
cmd /c C:\sonar-runner-2.4\bin\sonar-runner.bat
(sonar-runner is installed on the build machine)
I get the following errors:
ERROR: Sonar server 'http://localhost:9000' can not be reached
ERROR: Error during Sonar runner execution
ERROR: java.net.ConnectException: Connection refused: connect
ERROR: Caused by: Connection refused: connect
What can cause these errors?
Hi dinesh,
this is my sonar-runner.properties file:
sonar.projectKey=NDM
sonar.projectName=NDM
sonar.projectVersion=1.0
sonar.visualstudio.solution=NDM.sln
#sonar.sourceEncoding=UTF-8
sonar.web.host:sonarqube
sonar.web.port=9000
# Enable the Visual Studio bootstrapper
sonar.visualstudio.enable=true
# Unit Test Results
sonar.cs.vstest.reportsPaths=TestResults/*.trx
# Required only when using SonarQube < 4.2
sonar.language=cs
sonar.sources=.
As you can see, I set sonar.web.host:sonarqube and sonar.web.port=9000, but when I run sonar-runner.bat I still get
ERROR: Sonar server 'http://localhost:9000' can not be reached
Why is it still looking for localhost:9000 and not sonarqube:9000 as I set?
I saw the following line in the log of sonar-runner.bat:
INFO: Work directory: D:\sTFS\26091\Sources\NDM\Source..sonar
while my solution is in D:\sTFS\26091\Sources\NDM\Source\
Could this be the problem?
Thanks,
Guy
If you use the SonarScanner CLI with Docker, you may get this error because the SonarScanner container cannot access the SonarQube UI container.
Note that you will have the same error with a simple curl from another container:
docker run --rm byrnedo/alpine-curl 127.0.0.1:9000
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 127.0.0.1 port 9000: Connection refused
The solution is to connect the SonarScanner container to the same Docker network as your SonarQube instance, for instance with --network=host:
docker run --network=host -e SONAR_HOST_URL='http://127.0.0.1:9000' --user="$(id -u):$(id -g)" -v "$PWD:/usr/src" sonarsource/sonar-scanner-cli
(the other parameters of this command come from the SonarScanner CLI documentation)
I got the same issue; I changed localhost to my IP address and it works well.
Go to System Preferences --> Network --> Advanced --> TCP/IP tab --> copy the IPv4 address.
Use that IP instead of localhost.
Hope this can help.
You should configure the sonar-runner to use your existing SonarQube server. To do so, you need to update its conf/sonar-runner.properties file and specify the SonarQube server URL, username, password, and JDBC URL as well. See https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner for details.
If you don't yet have an up and running SonarQube server, then you can launch one locally (with the default configuration) - it will bind to http://localhost:9000 and work with the default sonar-runner configuration. See https://docs.sonarqube.org/latest/setup/get-started-2-minutes/ for details on how to get started with the SonarQube server.
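Note that the scanner-side property that points at the server is sonar.host.url; sonar.web.host and sonar.web.port are server-side settings that belong in the server's sonar.properties, not in the scanner's configuration. A minimal conf/sonar-runner.properties sketch, assuming the server is reachable under the hostname sonarqube:

```properties
# Scanner-side: where to find the SonarQube server.
sonar.host.url=http://sonarqube:9000
```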
For others who ran into this issue in a project that is not using a sonar-runner.properties file, you may find (as I did) that you need to tweak your pom.xml file, adding a sonar.host.url property.
For example, I needed to add the following line under the 'properties' element:
<sonar.host.url>https://sonar.my-internal-company-domain.net</sonar.host.url>
Where the url points to our internal sonar deployment.
For me the issue was that the Maven sonar plugin was using the proxy servers defined in the Maven settings.xml. I was trying to access SonarQube on another alias (not localhost), so the plugin was trying to use the proxy server to reach it. I just added my alias to nonProxyHosts in settings.xml and it is working now. I did not face this issue with maven sonar plugin 3.2, only after I upgraded it.
<proxy>
  <id>proxy_id</id>
  <active>true</active>
  <protocol>http</protocol>
  <host>your-proxy-host</host>
  <port>your-proxy-port</port>
  <nonProxyHosts>localhost|127.0.*|other-non-proxy-hosts</nonProxyHosts>
</proxy>
The issue occurred for me in a slightly different way a while ago.
I had a Docker container running normally on the main network of my host machine, accessible in the browser at the usual localhost:9000. But the scanner could not connect to the server, despite being on the same network as the host.
I made sure they were, because on the docker run command I specified --network=bridge.
So the trick was to point at my machine's actual local IP instead of just writing localhost.
You can find your machine's IP by typing ipconfig on Windows or ifconfig on Linux.
So on the scanner's docker run command I pointed at the server like this: -Dsonar.host.url=http://192.168.1.2:9000, where 192.168.1.2 is my local address.
That was my final docker commands to run the Server:
docker run -d --name sonarqube \
--network=bridge \
-p 9000:9000 \
-e SONAR_JDBC_USERNAME=<db username> \
-e SONAR_JDBC_PASSWORD=<db password> \
-v sonarqube_data:/opt/sonarqube/data \
-v sonarqube_extensions:/opt/sonarqube/extensions \
-v sonarqube_logs:/opt/sonarqube/logs \
sonarqube:community
and that's for the Scanner:
docker run \
--network=bridge \
-v "<local path of the project to scan>:/usr/src" sonarsource/sonar-scanner-cli \
-Dsonar.projectKey=<project key> \
-Dsonar.sources=. \
-Dsonar.host.url=http://<local-ip>:9000 \
-Dsonar.login=<token>
In the config file there is a colon instead of an equal sign after the sonar.web.host.
Is:
sonar.web.host:sonarqube
Should be:
sonar.web.host=sonarqube
In the sonar.properties file in the conf folder, I had hardcoded the IP of the machine where SonarQube was installed in the property sonar.web.host=10.9 235.22. I commented this out and it started working for me.
Please check that Postgres (or whichever database service you use) is running properly.
If your operating system's firewall blocks port 9000, allowing it removes the error "ERROR: Sonar server 'http://localhost:9000' can not be reached". On Ubuntu, type the following command in a terminal: sudo ufw allow 9000/tcp. After that, the error is gone when you click Build Now in Jenkins.