Currently I have Artifactory set up through a system.yaml file
configVersion: 1
shared:
  security:
    exposeApplicationHeaders: true
  node:
    id: "*.example.com"
    ip: artifacts.example.com
  metrics:
    enabled: true
artifactory:
  #port: 8081
  tomcat:
    httpsConnector:
      enabled: true
      port: 8443
      certificateFile: "$JFROG_HOME/artifactory/var/etc/artifactory/security/trusted/server2.crt"
      certificateKeyFile: "$JFROG_HOME/artifactory/var/etc/artifactory/security/trusted/server.key"
frontend:
  featureToggler:
    commonProjects: true
And I'm able to access the web UI on port 8082 over HTTPS just fine.
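For reference, a quick way to confirm which of these ports actually terminate TLS (same hostname as above; purely illustrative):
curl -vk https://artifacts.example.com:8082/   # web UI over HTTPS, which already works
curl -vk https://artifacts.example.com:8443/   # the Tomcat httpsConnector port from system.yaml
curl -vk https://artifacts.example.com:8081/   # the default plain-HTTP connector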
I created a repo for Conan artifacts and generated an API key. Then, using the "set me up" prompt, I ran the following commands on my dev machine:
conan remote add myremote https://artifacts.example.com:8081/artifactory/api/conan/myremote
conan user -p <apikey> -r myremote will
I then get the following error from Conan
ERROR: HTTPSConnectionPool(host='artifacts.example.com', port=8081): Max retries exceeded with url: /artifactory/api/conan/myremote/v1/ping (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1131)')))
Unable to connect to myremote=https://artifacts.example.com:8081/artifactory/api/conan/myremote
1. Make sure the remote is reachable or,
2. Disable it by using conan remote disable,
Then try again.
I tried to repeat the same steps but using http instead of https, and everything worked fine. What am I doing wrong that won't let HTTPS access work?
[problem]
Is it possible to output "port 8080" in this error log as "port 80"?
[procedure]
$ cf push
$ cf set-health-check <myapp> --endpoint / health
$ cf logs <myapp>
-
[HEALTH / 0] ERR Failed to make HTTP request to '/ health' on port 8080: received status code 404 in 0ms
[CELL / 0] ERR Timed out after 10m0s: health check never passed.
-
It looks like you might have a typo in one of your commands. There is a space between / and health in the cf set-health-check command. That appears to be injecting a space into the URL that's used by the health check, so it's trying to reach the literal URL /%20health, which doesn't exist, hence the 404 response.
Try cf set-health-check <myapp> --endpoint "/health" and see if that works.
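If it helps, here is a sketch of the full corrected sequence; depending on your CLI version the http health-check type may need to be passed explicitly, and <myapp> remains a placeholder:
cf set-health-check <myapp> http --endpoint "/health"   # quoting prevents the stray space
cf restart <myapp>                                      # the updated health check is applied on restart
cf logs <myapp> --recent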
Sorry, I'm not sure about this part:
Is it possible to output "port 8080" in this error log as "port 80"?
Your app seems to be listening on the correct port: the 404 means something is listening and responding. Port 8080 is also the port Cloud Foundry apps should be listening on 99.9% of the time, so I wouldn't try to change that.
Hope that helps!
I just built an NPM Verdaccio private registry server within our local network, and I would like to configure an uplink to our remote NPM Verdaccio server, which is hosted at AWS (and also keep the original npmjs registry).
snippet from Verdaccio config.yaml
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
  our-NPM-AWS-server:
    url: https://our-NPM-AWS-server.com
Based on the documentation (Verdaccio_UPLINK), I have to set the authentication parameters there somehow.
I found the usage of uplinks here (uplink authorization) and here (getting an Auth Token), but it is pretty confusing for me because I am not sure what to set as the auth method:
auth:
  token:
    type: bearer | basic,
    token: "token",
    token_env: true | <get name process.env> `NPM_TOKEN`
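For context, filled in for the our-NPM-AWS-server uplink above and keeping the nesting from that docs snippet, I assume it would look roughly like this (just a sketch; which variant is correct is exactly my question):
uplinks:
  our-NPM-AWS-server:
    url: https://our-NPM-AWS-server.com
    auth:
      token:
        type: bearer
        token_env: true   # i.e. read the token from process.env.NPM_TOKEN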
I was not able to find any tutorial to guide me, so I would like to ask for some insight and help: what needs to be set on the internal NPM server, and what on the remote NPM AWS server?
Configuration:
Internal NPM server
ubuntu 16.04, node v8.11.1, npm v5.8, Verdaccio v.2.7.4, access is controlled by .htpasswd, NPM is accessible on port 80 (listens on http://127.0.0.1:4873)
Remote own NPM server at AWS
ubuntu 14.04, node v6.14.1, npm v3.10.10, Verdaccio v.2.7.4, access is controlled by .htpasswd, NPM is accessible only via 443 from the outside (proxy_http listens on http://127.0.0.1:4873 with an url_prefix: https://our-NPM-AWS-server.com)
Both servers are operating normally (you can log there with your NPM account, push the packages, etc).
thank you very much
EDIT 2018-04-26
The AWS NPM server is registered behind an Application ELB, which listens on port 443. The AWS NPM server itself listens on port 443 and is located in a private subnet.
I tried to place the AWS Verdaccio instance into a public subnet and access it directly without the ELB; however, it didn't have any effect and the behavior was the same.
The config.yaml file of AWS NPM
The UPLINKS part was not changed
packages:
  '@*/*':
    # scoped packages
    access: $all
    publish: $authenticated
    proxy: npmjs

  '**':
    # allow all users (including non-authenticated users) to read and
    # publish all packages
    #
    # you can specify usernames/groupnames (depending on your auth plugin)
    # and three keywords: "$all", "$anonymous", "$authenticated"
    access: $authenticated

    # allow all known users to publish packages
    # (anyone can register by default, remember?)
    publish: $authenticated

    # if package is not available locally, proxy requests to 'npmjs' registry
    proxy: npmjs
I tried to set
'**':
  access: $all
However, it didn't have any effect.
The config.yaml of Internal Verdaccio Server
uplinks:
  aws:
    url: https://our-NPM-AWS-server.com/
    #strictUrlMatch: false
    headers:
      authorization: "Basic <token_which_I_harvested_from_/.npmrc_file>"
packages:
  '@*/*':
    # scoped packages
    access: $all
    publish: $authenticated
    proxy: aws

  '**':
    # allow all users (including non-authenticated users) to read and
    # publish all packages
    #
    # you can specify usernames/groupnames (depending on your auth plugin)
    # and three keywords: "$all", "$anonymous", "$authenticated"
    access: $all

    # allow all known users to publish packages
    # (anyone can register by default, remember?)
    publish: $authenticated

    # if package is not available locally, proxy requests to 'npmjs' registry
    proxy: aws
On the internal Verdaccio instance, I tried to get a package from the AWS Verdaccio instance:
npm pack --verbose verdaccio-bitbucket
and this is the log from AWS Verdaccio:
{"name":"verdaccio","hostname":"hostname_our-NPM-AWS-server","pid":8494,"sub":"in",
"level":30,"req":{"method":"GET","url":"/verdaccio-bitbucket",
"headers":{"host":"our-NPM-AWS-server.com","x-forwarded-for"
:"Public_IP_of_Internal_Verdaccio, 10.XXX.XX.XXX","x-forwarded-proto"
:"https","x-forwarded-port":"443","x-amzn-trace-id":
"Root=X-XXXXXX-XXXXXXXXXXXXXXXX","accept":"application/json;",
"accept-encoding":"gzip","user-agent":"npm (verdaccio/2.7.4)",
"via":"1.1 f8d74eab3cc6 (Verdaccio)","authorization":"<Classified>",
"x-forwarded-host":"our-NPM-AWS-server.com",
"x-forwarded-server":"our-NPM-AWS-server.com","connection":"Keep-Alive"},
"remoteAddress":"127.0.0.1","remotePort":42608},"ip":"127.0.0.1",
"msg":"#{ip} requested '#{req.method} #{req.url}'",
"time":"2018-04-26T20:12:38.893Z","v":0}
{"name":"verdaccio","hostname":"hostname_our-NPM-AWS-server","pid":8494,"sub":"in",
"level":35,"request":{"method":"GET","url":"/verdaccio-bitbucket"},
"remoteIP":"Public_IP_of_Internal_Verdaccio, 10.XXX.XX.XXX via
127.0.0.1","**status":403,"error":"unregistered users are not allowed
to access package verdaccio-bitbucket"**,"bytes":
"in":0,"out":180},"msg":"#{status}, user: #{user}(#{remoteIP}),
req: '#{request.method} #{request.url}', error: #{!error}",
"time":"2018-04-26T20:12:38.895Z","v":0}
and this is the log from the internal Verdaccio instance, where the command was run:
http --> 200, req: 'GET https://our-NPM-AWS-server.com/verdaccio-bitbucket' (streaming)
http --> 200, req: 'GET https://our-NPM-AWS-server.com/verdaccio-bitbucket', bytes: 0/34578
http <-- 200, user: <npm_account>(127.0.0.1), req: 'GET /verdaccio-bitbucket', bytes: 0/5038
http <-- 500, user: <npm_account>(127.0.0.1), req: 'GET /verdaccio-bitbucket/-/verdaccio-bitbucket-1.0.0.tgz', error: bad uplink status code: 403
http <-- 500, user: <npm_account>(127.0.0.1), req: 'GET /verdaccio-bitbucket/-/verdaccio-bitbucket-1.0.0.tgz', error: bad uplink status code: 403
http <-- 500, user: <npm_account>(127.0.0.1), req: 'GET /verdaccio-bitbucket/-/verdaccio-bitbucket-1.0.0.tgz', error: bad uplink status code: 403
Your configuration is almost correct, but one detail is wrong. Let me fix it:
uplinks:
  aws:
    url: https://our-NPM-AWS-server.com/
    #strictUrlMatch: false
    headers:
      authorization: "Bearer <token_which_I_harvested_from_/.npmrc_file>"
Do not use Basic; it must be Bearer, because Verdaccio uses JWT. Unfortunately, the Verdaccio middleware does not accept bearer in lowercase (it does since verdaccio@v3.0.0-beta.7).
For clarification about Basic and JWT: since verdaccio@2.3.0, all tokens are generated with a JWT library. For legacy/unit-testing reasons we still accept Basic authentication headers, but all new tokens generated since verdaccio@2.3.0 must use Bearer in the headers instead of Basic.
There are a couple of minor issues here that I will report on GitHub, but they still cause problems like this one.
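In case it is not obvious where that Bearer token comes from, a rough sketch (registry URL as in the question; the .npmrc line below is just the usual format, not your actual token):
# on the internal Verdaccio host, log in once against the AWS registry
npm login --registry https://our-NPM-AWS-server.com/
# the generated JWT then lands in ~/.npmrc, in a line like
#   //our-NPM-AWS-server.com/:_authToken="<JWT>"
# that <JWT> value is what goes after "Bearer " in the uplink headers above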
I hope it helps.
I'm experimenting with CF in my local bosh-lite setup.
The apps that I deploy into it work well. I am now trying to follow the steps here
https://github.com/cf-platform-eng/cf-community-workshop/blob/master/demos/service-broker-lab.adoc
to try out the custom service broker setup.
The https://github.com/mstine/haash-broker application starts and is running fine:
$ cf apps
name           requested state   instances   memory   disk   urls
haash-broker   started           1/1         768M     1G     haash-broker.vbox.mojito, haash-broker.192.168.50.6.xip.io
I can access it from my host machine browser well:
http://haash-broker.192.168.50.6.xip.io/v2/catalog
But when I execute the
cf create-service-broker haash-broker warreng natedogg http://haash-broker.192.168.50.6.xip.io
I get
$ cf create-service-broker haash-broker warreng natedogg http://haash-broker.192.168.50.6.xip.io
Creating service broker haash-broker as admin...
FAILED
Server error, status code: 502, error code: 10001, message: The service broker could not be reached: http://haash-broker.192.168.50.6.xip.io/v2/catalog
When I log in into the CC VM:
$ bosh -e vbox -f cf ssh api/eb4cec99-bab1-4513-a980-fb92775ac2d8
I can ping the hostname:
api/eb4cec99-bab1-4513-a980-fb92775ac2d8:~$ sudo ping haash-broker.192.168.50.6.xip.io
PING haash-broker.192.168.50.6.xip.io (192.168.50.6) 56(84) bytes of data.
64 bytes from 192.168.50.6: icmp_seq=1 ttl=64 time=0.080 ms
But wget connection gets refused:
api/eb4cec99-bab1-4513-a980-fb92775ac2d8:~$ wget http://warreng:natedogg@haash-broker.192.168.50.6.xip.io/v2/catalog
--2018-04-06 04:19:05-- http://warreng:*password*@haash-broker.192.168.50.6.xip.io/v2/catalog
Resolving haash-broker.192.168.50.6.xip.io (haash-broker.192.168.50.6.xip.io)... 192.168.50.6
Connecting to haash-broker.192.168.50.6.xip.io (haash-broker.192.168.50.6.xip.io)|192.168.50.6|:80... failed: Connection refused.
The firewall permits everything on that VM (sudo iptables -L).
The hostname resolves properly, the ping works, and port 80 is open on the target IP, since I can reach it from my host browser.
How can it be that wget doesn't work in this situation?
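For completeness, one more check that could be run from inside the api VM, just to see whether anything on that box is listening on port 80 at all (standard tooling, nothing CF-specific):
sudo netstat -tlnp | grep ':80 '    # or: sudo ss -tlnp | grep ':80 '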
This also seems to be the reason why cf create-service-broker fails for me.
UPDATE
I've managed to execute the cf create-service-broker command with the URL of an nginx reverse proxy running outside of my bosh-lite environment. The proxy redirects to the same initial URL http://haash-broker.192.168.50.6.xip.io
and the command succeeds in this way.
But the subsequent
cf create-service-broker haash-broker warreng natedogg http://haash-broker.192.168.50.1.xip.io:9999
cf enable-service-access haash
cf create-service HaaSh basic my-hash
(where haash-broker.192.168.50.1.xip.io:9999 is my nginx proxy) fails with
Server error, status code: 502, error code: 10001, message: The service broker rejected the request to http://haash-broker.192.168.50.1.xip.io:9999/v2/service_instances/4ef19154-d238-4cb3-8003-803fba53af3f?accepts_incomplete=true. Status Code: 400 Bad Request, Body: {"timestamp":1523008856993,"error":"Bad Request","status":400,"message":""}
I can see in both the nginx and broker app logs that the request reaches the broker and it answers with 400.
Now debugging why.
Can you post the result of the --server-response option used with wget? Also, what happens when you try to curl the broker?
The broker requires credentials, but it would be interesting to see whether it responds with 401 or 500 on the first request that wget makes without credentials.
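Concretely, something along these lines, with the credentials and URL taken from the question:
wget --server-response http://warreng:natedogg@haash-broker.192.168.50.6.xip.io/v2/catalog
curl -v -u warreng:natedogg http://haash-broker.192.168.50.6.xip.io/v2/catalog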
How do I add a self-signed certificate to Cloud Foundry (PCFDev) so that I can deploy a Docker image from a private Docker registry?
For this example I'm using PCFDev:
user#work:(0):~/Documents/$ cf push app-ui -o nexus-dev/app/app-ui:latest
Creating app app-ui in org pcfdev-org / space pcfdev-space as user...
OK
Creating route app-ui.local.pcfdev.io...
OK
Binding app-ui.local.pcfdev.io to app-ui...
OK
Starting app app-ui in org pcfdev-org / space pcfdev-space as user...
Creating container
Successfully created container
Staging...
Staging process started ...
Failed to talk to docker registry: Get https://nexus-dev/v2/: x509: certificate signed by unknown authority
Failed getting docker image by tag: Error parsing HTTP response: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>\r\n<body bgcolor=\"whit
e\">\r\n<center><h1>400 Bad Request</h1></center>\r\n<center>The plain HTTP request was sent to HTTPS port</center>\r\n<hr><center>nginx/1.10.0 (Ubuntu)</center>\r\n</body>\r\n</html>\r\n"
Staging process failed: Exit trace for group:
builder exited with error: failed to fetch metadata from [app/app-ui] with tag [latest] and insecure registries [] due to Error parsing HTTP response: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>
400 The plain HTTP request was sent to HTTPS port</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>400 Bad Request</h1></center>\r\n<center>The plain HTTP request was sent to HTTPS port</center>\r\n<hr><center>nginx/1.10.0
(Ubuntu)</center>\r\n</body>\r\n</html>\r\n"
Exit status 2
Staging Failed: Exited with status 2
Destroying container
Successfully destroyed container
FAILED
Error restarting application: StagingError
TIP: use 'cf logs app-ui --recent' for more information
You can start PCF Dev with the -r option, e.g.
cf dev start -r host.pcfdev.io:5000
(see Insecure Docker Registries).
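Applied to the registry from the question, that would look something like this; the exact host:port depends on how nexus-dev is actually exposed, so treat it as a sketch rather than a verified setup:
cf dev start -r nexus-dev:443
cf push app-ui -o nexus-dev/app/app-ui:latest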