janusgraph encounters `PermanentBackendException` while interacting with AWS opensearch version 7 and above - amazon-web-services

We are trying to run Janusgraph version 0.6.2 using AWS opensearch(elasticsearch) version 7.10 as the indexing backend. Things work fine with version 6.x but when we try to connect to version 7.x we encounter the following exception.
org.janusgraph.diskstorage.PermanentBackendException: method [PUT], host [https://vpc-xxxxxx.us-east-2.es.amazonaws.com:443], URI [/_cluster/settings], status line [HTTP/1.1 401 Unauthorized]
{"Message":"Your request: '/_cluster/settings' payload is not allowed."}
Janusgraph version info:
86 [main] INFO org.janusgraph.graphdb.server.JanusGraphServer - JanusGraph Version: 0.6.2
86 [main] INFO org.janusgraph.graphdb.server.JanusGraphServer - TinkerPop Version: 3.5.3
More detailed stack trace is below:
3115 [main] INFO org.janusgraph.diskstorage.Backend - Configuring index [search]
3387 [main] INFO com.newforma.janusgraph.es.awsauth.AWSV4AuthHttpClientConfigCallback - Initialized AWSV4AuthHttpClientConfigCallback for region us-east-2
3782 [main] WARN org.apache.tinkerpop.gremlin.server.util.DefaultGraphManager - Graph [graph] configured at [/etc/opt/janusgraph/janusgraph.properties] could not be instantiated and will not be available in Gremlin Server. GraphFactory message: GraphFactory could not instantiate this Graph implementation [class org.janusgraph.core.JanusGraphFactory]
java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [class org.janusgraph.core.JanusGraphFactory]
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:84)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:80)
... 14 more
Caused by: java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.es.ElasticSearchIndex
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:79)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:527)
at org.janusgraph.diskstorage.Backend.getIndexes(Backend.java:511)
at org.janusgraph.diskstorage.Backend.<init>(Backend.java:239)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:127)
... 19 more
Caused by: org.janusgraph.diskstorage.PermanentBackendException: method [PUT], host [https://vpc-xxxxxx.us-east-2.es.amazonaws.com:443], URI [/_cluster/settings], status line [HTTP/1.1 401 Unauthorized]
{"Message":"Your request: '/_cluster/settings' payload is not allowed."}
at org.janusgraph.diskstorage.es.ElasticSearchIndex.setupMaxOpenScrollContextsIfNeeded(ElasticSearchIndex.java:445)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.<init>(ElasticSearchIndex.java:388)
... 32 more
Caused by: org.elasticsearch.client.ResponseException: method [PUT], host [https://vpc-xxxxxx.us-east-2.es.amazonaws.com:443], URI [/_cluster/settings], status line [HTTP/1.1 401 Unauthorized]
{"Message":"Your request: '/_cluster/settings' payload is not allowed."}
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:326)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:270)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.performRequest(RestElasticSearchClient.java:482)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.performRequest(RestElasticSearchClient.java:473)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.updateClusterSettings(RestElasticSearchClient.java:269)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.setupMaxOpenScrollContextsIfNeeded(ElasticSearchIndex.java:443)

From the stack trace it appears that janusgraph was trying to set a high value for the elasticsearch property max_open_scroll_context. It is 500 by default.
AWS opensearch(elasticsearch) 7.x onwards doesn't let us set cluster properties.
Tried the following from kibana and I was able to get a similar response. This operation was supported in AWS managed elasticsearch 6.x version.
PUT _cluster/settings
{
  "persistent" : {
    "search.max_open_scroll_context": 1024
  },
  "transient": {
    "search.max_open_scroll_context": 1024
  }
}
401 - Unauthorized
{"Message":"Your request: '/_cluster/settings' payload is not allowed."}
We can disable setting max_open_scroll_context property while janusgraph starts by setting the property index.[x].elasticsearch.setup-max-open-scroll-contexts to false.
You can read more on this in configuration reference section on elasticsearch https://docs.janusgraph.org/configs/configuration-reference/#indexxelasticsearch

Related

Google Cloud SDK throws Reachability Check failed after Command Line Tools update on macOS 12.4

After the software update of Command Line Tools for Xcode to the version 13.4 the gcloud compute ssh command stopped working with the error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate.
I'm not behind proxy or firewall.
What I've tried so far: updating google cloud sdk, then reinstalling, then removing and installing google cloud sdk from scratch a number of times but the gcloud init command fails to complete with the same error. Downgrading command line tools to 13.2 didn't help. Updating certifi and launching "Install Certificates.command" neither.
output of "gcloud info --run-diagnostics --verbosity debug":
DEBUG: Running [gcloud.info] with arguments: [--run-diagnostics: "True", --verbosity: "debug"]
Network diagnostic detects and fixes local network connection issues.
Checking network connection...⠏DEBUG: Starting new HTTPS connection (1): accounts.google.com:443
Checking network connection...⠛DEBUG: https://accounts.google.com:443 "GET / HTTP/1.1" 302 338
Checking network connection...⠹DEBUG: https://accounts.google.com:443 "GET /ServiceLogin?passive=1209600&continue=https%3A%2F%2Faccounts.google.com%2F&followup=https%3A%2F%2Faccounts.google.com%2F HTTP/1.1" 302 526
DEBUG: https://accounts.google.com:443 "GET /v3/signin/identifier?dsh=S352504070%3A1656098809680794&continue=https%3A%2F%2Faccounts.google.com%2F&followup=https%3A%2F%2Faccounts.google.com%2F&passive=1209600&flowName=WebLiteSignIn&flowEntry=ServiceLogin&ifkv=AX3vH3-l3sW9otbTScMC6LItjgqZXIpEl6jaKQLX4a-o3Z7M4L5oVPqMq_V_Vltgjce-HlGz4y0mFQ HTTP/1.1" 200 None
Checking network connection...⠼DEBUG: Starting new HTTPS connection (1): cloudresourcemanager.googleapis.com:443
DEBUG: Starting new HTTPS connection (1): www.googleapis.com:443
Checking network connection...⠶DEBUG: Starting new HTTPS connection (1): dl.google.com:443
Checking network connection...⠧DEBUG: https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components-2.json HTTP/1.1" 200 190919
Checking network connection...done.
ERROR: Reachability Check failed.
httplib2 cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)
httplib2 cannot reach https://www.googleapis.com/auth/cloud-platform:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)
requests cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects:
HTTPSConnectionPool(host='cloudresourcemanager.googleapis.com', port=443): Max retries exceeded with url: /v1beta1/projects (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)')))
requests cannot reach https://www.googleapis.com/auth/cloud-platform:
HTTPSConnectionPool(host='www.googleapis.com', port=443): Max retries exceeded with url: /auth/cloud-platform (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)')))
Network connection problems may be due to proxy or firewall settings.
Do you have a network proxy you would like to set in gcloud (Y/n)? n
ERROR: Network diagnostic failed (0/1 checks passed).
Property diagnostic detects issues that may be caused by properties.
Checking hidden properties...done.
Hidden Property Check passed.
Property diagnostic passed (1/1 checks passed).
DEBUG: (gcloud.info) Some of the checks in diagnostics failed.
Traceback (most recent call last):
File "/Users/gclouder/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 987, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/Users/gclouder/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 809, in Run
resources = command_instance.Run(args)
File "/Users/gclouder/google-cloud-sdk/lib/surface/info.py", line 91, in Run
raise exceptions.Error('Some of the checks in diagnostics failed.')
googlecloudsdk.core.exceptions.Error: Some of the checks in diagnostics failed.
ERROR: (gcloud.info) Some of the checks in diagnostics failed.
output of "gcloud info":
Google Cloud SDK [391.0.0]
Platform: [Mac OS X, x86_64] uname_result(system='Darwin', node='gclouder.local', release='21.5.0', version='Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:22 PDT 2022; root:xnu-8020.121.3~4/RELEASE_X86_64', machine='x86_64', processor='i386')
Locale: (None, 'UTF-8')
Python Version: [3.7.9 (v3.7.9:13c94747c7, Aug 15 2020, 01:31:08) [Clang 6.0 (clang-600.0.57)]]
Python Location: [/Users/gclouder/.config/gcloud/virtenv/bin/python3]
OpenSSL: [OpenSSL 1.1.1g 21 Apr 2020]
Requests Version: [2.22.0]
urllib3 Version: [1.25.9]
Site Packages: [Enabled]
Installation Root: [/Users/gclouder/google-cloud-sdk]
Installed Components:
gsutil: [5.10]
core: [2022.06.17]
bq: [2.0.75]
System PATH: [/Users/gclouder/.config/gcloud/virtenv/bin:/Users/gclouder/google-cloud-sdk/bin:/Users/gclouder/.nvm/versions/node/v14.19.0/bin:/Users/gclouder/.jenv/shims:/Users/gclouder/.jenv/bin:/usr/local/opt/mysql@5.7/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin]
Python PATH: [/Users/gclouder/google-cloud-sdk/lib/third_party:/Users/gclouder/google-cloud-sdk/lib:/Library/Frameworks/Python.framework/Versions/3.7/lib/python37.zip:/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7:/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload:/Users/gclouder/.config/gcloud/virtenv/lib/python3.7/site-packages]
Cloud SDK on PATH: [True]
Kubectl on PATH: [/usr/local/bin/kubectl]
Installation Properties: [/Users/gclouder/google-cloud-sdk/properties]
User Config Directory: [/Users/gclouder/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/Users/gclouder/.config/gcloud/configurations/config_default]
Account: [None]
Project: [None]
Current Properties:
[core]
disable_usage_reporting: [True] (property file)
Logs Directory: [/Users/gclouder/.config/gcloud/logs]
Last Log File: [/Users/gclouder/.config/gcloud/logs/2022.06.24/21.26.47.993939.log]
git: [git version 2.32.1 (Apple Git-133)]
ssh: [OpenSSH_8.6p1, LibreSSL 3.3.6]
Update: it was the corporate antivirus that started behaving this way after a software update

istio upgrade from 1.4.6 -> 1.5.0 throws istiod errors : remote error: tls: error decrypting message

Just upgraded istio from 1.4.6 (helm) to istio 1.5.0 (istioctl) [Purged istio and installed from istioctl] but it appears the istiod logs keep throwing the following :
2020-03-16T18:25:45.209055Z info grpc: Server.Serve failed to complete security handshake from "10.150.56.111:56870": remote error: tls: error decrypting message
2020-03-16T18:25:46.792447Z info grpc: Server.Serve failed to complete security handshake from "10.150.57.112:49162": remote error: tls: error decrypting message
2020-03-16T18:25:46.930483Z info grpc: Server.Serve failed to complete security handshake from "10.150.56.160:36878": remote error: tls: error decrypting message
2020-03-16T18:25:48.284122Z info grpc: Server.Serve failed to complete security handshake from "10.150.52.230:44758": remote error: tls: error decrypting message
2020-03-16T18:25:48.288180Z info grpc: Server.Serve failed to complete security handshake from "10.150.57.149:56756": remote error: tls: error decrypting message
2020-03-16T18:25:49.108515Z info grpc: Server.Serve failed to complete security handshake from "10.150.57.151:53970": remote error: tls: error decrypting message
2020-03-16T18:25:49.111874Z info Handling event update for pod contentgatewayaidest-7f4694d87-qmq8z in namespace djin-content -> 10.150.53.50
2020-03-16T18:25:49.519861Z info grpc: Server.Serve failed to complete security handshake from "10.150.57.91:59510": remote error: tls: error decrypting message
2020-03-16T18:25:50.133664Z info grpc: Server.Serve failed to complete security handshake from "10.150.57.203:59726": remote error: tls: error decrypting message
2020-03-16T18:25:50.331020Z info grpc: Server.Serve failed to complete security handshake from "10.150.57.195:59970": remote error: tls: error decrypting message
2020-03-16T18:25:52.110695Z info Handling event update for pod contentgateway-d74b44c7-dtdxs in namespace djin-content -> 10.150.56.215
2020-03-16T18:25:53.312761Z info Handling event update for pod dysonpriority-b6dbc589b-mk628 in namespace djin-content -> 10.150.52.91
2020-03-16T18:25:53.496524Z info grpc: Server.Serve failed to complete security handshake from "10.150.56.111:57276": remote error: tls: error decrypting message
This also leads to no sidecars successfully launching and failing with :
2020-03-16T18:32:17.265394Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 16 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2020-03-16T18:32:19.269334Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 16 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2020-03-16T18:32:21.265214Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 16 successful, 0 rejected; lds updates: 0 successful, 0 rejected
2020-03-16T18:32:23.266159Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 16 successful, 0 rejected; lds updates: 0 successful,
Weirdly other clusters that I upgraded go through fine. Any idea where this error might be popping up from ? istioctl analyze works fine.
error goes away after killing the nodes (recreating) but istio-proxies still fail with :
info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 1 successful, 0 rejected; lds updates: 0 successful, 0 rejected
As far as I know, since version 1.4.4 they added istioctl upgrade, which should be used when you want to upgrade istio from 1.4.x to 1.5.0.
The istioctl upgrade command performs an upgrade of Istio. Before performing the upgrade, it checks that the Istio installation meets the upgrade eligibility criteria. Also, it alerts the user if it detects any changes in the profile default values between Istio versions.
The upgrade command can also perform a downgrade of Istio.
See the istioctl upgrade reference for all the options provided by the istioctl upgrade command.
istioctl upgrade --help
The upgrade command checks for upgrade version eligibility and, if eligible, upgrades the Istio control plane components in-place. Warning: traffic may be disrupted during upgrade. Please ensure PodDisruptionBudgets are defined to maintain service continuity.
I made a test on gcp cluster with istio 1.4.6 installed with istioctl and then I used istioctl upgrade from version 1.5.0 and everything works fine.
kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-598796f4d9-lvzdb 1/1 Running 0 12m
istiod-7d9c7bdd6-mggx7 1/1 Running 0 12m
prometheus-b47d8c58c-7spq5 2/2 Running 0 12m
I checked the logs and made some simple examples and no errors occur in istiod like in your example.
Upgrade prerequisites for istioctl upgrade
Ensure you meet these requirements before starting the upgrade process:
Istio version 1.4.4 or higher is installed.
Your Istio installation was installed using istioctl.
I assume because of the differences between 1.4.x and 1.5.0 there might be some issues when you want to use both of the installation methods, helm and istioctl. The best option here would be to install istio 1.4.6 with istioctl and then upgrade it to 1.5.0.
I hope this answers your question. Let me know if you have any more questions.

Debezium with AWS MSK NOT_ENOUGH_REPLICAS

I have a running debezium cluster in AWS, no issues with that. I want to give a try with AWS MSK. So I launched a cluster. Then I launched an EC2 for running my connectors.
Then installed confluent-kafka
sudo apt-get update && sudo apt-get install confluent-platform-2.12
By default the AWS MSK doesn't have schema registry, So I configured it from the connector EC2
Schema registry conf file:
kafkastore.connection.url=z-1.bhuvi-XXXXXXXXX.amazonaws.com:2181,z-3.bhuvi-XXXXXXXXX.amazonaws.com:2181,z-2.bhuvi-XXXXXXXXX.amazonaws.com:2181
kafkastore.bootstrap.servers=PLAINTEXT://b-2.bhuvi-XXXXXXXXX.amazonaws.com:9092,PLAINTEXT://b-4.bhuvi-XXXXXXXXX.amazonaws.com:9092,PLAINTEXT://b-1.bhuvi-XXXXXXXXX.amazonaws.com:9092
Then /etc/kafka/connect-distributed.properties file
bootstrap.servers=b-4.bhuvi-XXXXXXXXX.amazonaws.com:9092,b-3.bhuvi-XXXXXXXXX.amazonaws.com:9092,b-2.bhuvi-XXXXXXXXX.amazonaws.com:9092
plugin.path=/usr/share/java,/usr/share/confluent-hub-components
Install connector:
confluent-hub install debezium/debezium-connector-mysql:latest
start the service
systemctl start confluent-schema-registry
systemctl start confluent-connect-distributed
Now everything started. Then I created a mysql.json file.
{
  "name": "mysql-connector-db01",
  "config": {
    "name": "mysql-connector-db01",
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.server.id": "1",
    "tasks.max": "3",
    "database.history.kafka.bootstrap.servers": "172.31.47.152:9092,172.31.38.158:9092,172.31.46.207:9092",
    "database.history.kafka.topic": "schema-changes.mysql",
    "database.server.name": "mysql-db01",
    "database.hostname": "172.31.84.129",
    "database.port": "3306",
    "database.user": "bhuvi",
    "database.password": "my_stong_password",
    "database.whitelist": "proddb,test",
    "internal.key.converter.schemas.enable": "false",
    "key.converter.schemas.enable": "false",
    "internal.key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "internal.value.converter.schemas.enable": "false",
    "value.converter.schemas.enable": "false",
    "internal.value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.unwrap.add.source.fields": "ts_ms"
  }
}
Create debezium connector
curl -X POST -H "Accept: application/json" -H "Content-Type: application/json" http://localhost:8083/connectors -d @mysql.json
Then it started giving this error in the connector EC2.
Dec 20 11:42:36 ip-172-31-44-220 connect-distributed[2630]: [2019-12-20 11:42:36,290] WARN [Producer clientId=producer-3] Got error produce response with correlation id 844 on topic-partition connect-configs-0, retrying (2147482809 attempts left). Error: NOT_ENOUGH_REPLICAS (org.apache.kafka.clients.producer.internals.Sender:637)
Dec 20 11:42:36 ip-172-31-44-220 connect-distributed[2630]: [2019-12-20 11:42:36,391] WARN [Producer clientId=producer-3] Got error produce response with correlation id 845 on topic-partition connect-configs-0, retrying (2147482808 attempts left). Error: NOT_ENOUGH_REPLICAS (org.apache.kafka.clients.producer.internals.Sender:637)
Dec 20 11:42:36 ip-172-31-44-220 connect-distributed[2630]: [2019-12-20 11:42:36,492] WARN [Producer clientId=producer-3] Got error produce response with correlation id 846 on topic-partition connect-configs-0, retrying (2147482807 attempts left). Error: NOT_ENOUGH_REPLICAS (org.apache.kafka.clients.producer.internals.Sender:637)
Dec 20 11:42:36 ip-172-31-44-220 connect-distributed[2630]: [2019-12-20 11:42:36,593] WARN [Producer clientId=producer-3] Got error produce response with correlation id 847 on topic-partition connect-configs-0, retrying (2147482806 attempts left). Error: NOT_ENOUGH_REPLICAS (org.apache.kafka.clients.producer.internals.Sender:637)
It never stops this error message.
Describe of connect-configs
Topic:connect-configs PartitionCount:1 ReplicationFactor:1 Configs:cleanup.policy=compact
Topic: connect-configs Partition: 0 Leader: 2 Replicas: 2 Isr: 2
MSK sets min.in.sync.replicas to 2 for all topics by default (see https://docs.aws.amazon.com/msk/latest/developerguide/msk-default-configuration.html)
It's possible that Kafka Connect is producing using ACKs="all" and, since you only have one copy of your topic, it never achieves enough quorum.
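The check this answer describes, as an added example that is not part of the original post: it parses the describe output quoted in the question and reports the partitions on which an acks=all producer gets NOT_ENOUGH_REPLICAS. The function names and the broker default of 2 for min.insync.replicas (taken from the answer) are assumptions of this example.

```python
import re

# Constructed sample: the `kafka-topics --describe` output quoted in the question.
DESCRIBE_OUTPUT = """\
Topic:connect-configs PartitionCount:1 ReplicationFactor:1 Configs:cleanup.policy=compact
Topic: connect-configs Partition: 0 Leader: 2 Replicas: 2 Isr: 2
"""

# Broker-level default on MSK according to the answer (assumption of this example).
MSK_DEFAULT_MIN_ISR = 2

HEADER_RE = re.compile(r"\s*Topic:\s*(\S+)\s+PartitionCount:.*?Configs:\s*(\S*)")
PARTITION_RE = re.compile(
    r"\s*Topic:\s*(\S+)\s+Partition:\s*(\d+)\s+Leader:\s*\S+"
    r"\s+Replicas:\s*([\d,]+)\s+Isr:\s*([\d,]*)"
)


def parse_describe(text):
    """Return ({topic: {config: value}}, [partition dicts]) from describe output."""
    configs, partitions = {}, []
    for line in text.splitlines():
        header = HEADER_RE.match(line)
        if header:
            pairs = [p for p in header.group(2).split(",") if "=" in p]
            configs[header.group(1)] = dict(p.split("=", 1) for p in pairs)
            continue
        part = PARTITION_RE.match(line)
        if part:
            partitions.append({
                "topic": part.group(1),
                "partition": int(part.group(2)),
                "replicas": part.group(3).split(","),
                "isr": [b for b in part.group(4).split(",") if b],
            })
    return configs, partitions


def produce_findings(text, acks="all", broker_min_isr=MSK_DEFAULT_MIN_ISR):
    """Partitions on which a producer with these acks gets NOT_ENOUGH_REPLICAS."""
    if str(acks) not in ("all", "-1"):
        return []  # min.insync.replicas is only enforced for acks=all (-1)
    configs, partitions = parse_describe(text)
    findings = []
    for p in partitions:
        # A topic-level override shows up in the Configs column of the header line.
        topic_cfg = configs.get(p["topic"], {})
        min_isr = int(topic_cfg.get("min.insync.replicas", broker_min_isr))
        if len(p["replicas"]) < min_isr:
            cause = "replication_factor_too_low"  # permanent: retries never succeed
        elif len(p["isr"]) < min_isr:
            cause = "isr_below_minimum"  # clears once followers rejoin the ISR
        else:
            continue
        findings.append({"topic": p["topic"], "partition": p["partition"],
                         "replicas": len(p["replicas"]), "isr": len(p["isr"]),
                         "min_isr": min_isr, "cause": cause})
    return findings


if __name__ == "__main__":
    for f in produce_findings(DESCRIBE_OUTPUT):
        print("{topic}-{partition}: {replicas} replica(s), {isr} in sync, "
              "min.insync.replicas={min_isr} -> {cause}".format(**f))
```

For the topic in the question the cause is replication_factor_too_low, which is why the producer's retries never make progress.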

NPM Verdaccio - set uplink to own remote server

I just built an NPM Verdaccio private registry server within our local network and I would like configure an UPLINK to our remote NPM Verdaccio server which is hosted at AWS (and also keep the original npmjs registry).
snippet from Verdaccio config.yaml
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
  our-NPM-AWS-server:
    url: https://our-NPM-AWS-server.com
based on the documentation (Verdaccio_UPLINK), I have to set the Authentication parameters there, anyhow.
I found the usage of the UPLINKS here - uplink authorization & here - getting an Auth Token , but it is pretty confusing for me because I am not sure what to set as an AUTH method:
auth:
token:
type: bearer | basic,
token: "token",
token_env: true | <get name process.env> `NPM_TOKEN`
I was not able to find any tutorial which would guide me, so I would like to ask for some insight & help - what is necessary to set on the Internal NPM server & also on the remote NPM AWS server.
Configuration:
Internal NPM server
ubuntu 16.04, node v8.11.1, npm v5.8, Verdaccio v.2.7.4, access is controlled by .htpasswd, NPM is accessible on port 80 (listens on http://127.0.0.1:4873)
Remote own NPM server at AWS
ubuntu 14.04, node v6.14.1, npm v3.10.10, Verdaccio v.2.7.4, access is controlled by .htpasswd, NPM is accessible only via 443 from the outside (proxy_http listens on http://127.0.0.1:4873 with an url_prefix: https://our-NPM-AWS-server.com)
Both servers are operating normally (you can log there with your NPM account, push the packages, etc).
thank you very much
EDIT 2018-04-26
The AWS NPM server is registered into Application ELB, which listens on port 443. The AWS NPM server listens on port 443 and is located in private subnet.
I tried to place AWS Verdaccio instance into public subnet and to access it directly without ELB, however it didn't have any effect and the behavior was same.
The config.yaml file of AWS NPM
The UPLINKS part was not changed
packages:
  '@*/*':
    # scoped packages
    access: $all
    publish: $authenticated
    proxy: npmjs
  '**':
    # allow all users (including non-authenticated users) to read and
    # publish all packages
    #
    # you can specify usernames/groupnames (depending on your auth plugin)
    # and three keywords: "$all", "$anonymous", "$authenticated"
    access: $authenticated
    # allow all known users to publish packages
    # (anyone can register by default, remember?)
    publish: $authenticated
    # if package is not available locally, proxy requests to 'npmjs' registry
    proxy: npmjs
I tried to set
'**':
access: $all
However, it didn't have any effect.
The config.yaml of Internal Verdaccio Server
uplinks:
  aws:
    url: https://our-NPM-AWS-server.com/
    #strictUrlMatch: false
    headers:
      authorization: "Basic <token_which_I_harvested_from_/.npmrc_file>"
packages:
  '@*/*':
    # scoped packages
    access: $all
    publish: $authenticated
    proxy: aws
  '**':
    # allow all users (including non-authenticated users) to read and
    # publish all packages
    #
    # you can specify usernames/groupnames (depending on your auth plugin)
    # and three keywords: "$all", "$anonymous", "$authenticated"
    access: $all
    # allow all known users to publish packages
    # (anyone can register by default, remember?)
    publish: $authenticated
    # if package is not available locally, proxy requests to 'npmjs' registry
    proxy: aws
On Internal Verdaccio instance, I tried to get some package from AWS Verdaccio instance
npm pack --verbose verdaccio-bitbucket
and this is log from AWS Verdaccio:
{"name":"verdaccio","hostname":"hostname_our-NPM-AWS-server","pid":8494,"sub":"in",
"level":30,"req":{"method":"GET","url":"/verdaccio-bitbucket",
"headers":{"host":"our-NPM-AWS-server.com","x-forwarded-for"
:"Public_IP_of_Internal_Verdaccio, 10.XXX.XX.XXX","x-forwarded-proto"
:"https","x-forwarded-port":"443","x-amzn-trace-id":
"Root=X-XXXXXX-XXXXXXXXXXXXXXXX","accept":"application/json;",
"accept-encoding":"gzip","user-agent":"npm (verdaccio/2.7.4)",
"via":"1.1 f8d74eab3cc6 (Verdaccio)","authorization":"<Classified>",
"x-forwarded-host":"our-NPM-AWS-server.com",
"x-forwarded-server":"our-NPM-AWS-server.com","connection":"Keep-Alive"},
"remoteAddress":"127.0.0.1","remotePort":42608},"ip":"127.0.0.1",
"msg":"#{ip} requested '#{req.method} #{req.url}'",
"time":"2018-04-26T20:12:38.893Z","v":0}
{"name":"verdaccio","hostname":"hostname_our-NPM-AWS-server","pid":8494,"sub":"in",
"level":35,"request":{"method":"GET","url":"/verdaccio-bitbucket"},
"remoteIP":"Public_IP_of_Internal_Verdaccio, 10.XXX.XX.XXX via
127.0.0.1","**status":403,"error":"unregistered users are not allowed
to access package verdaccio-bitbucket"**,"bytes":
"in":0,"out":180},"msg":"#{status}, user: #{user}(#{remoteIP}),
req: '#{request.method} #{request.url}', error: #{!error}",
"time":"2018-04-26T20:12:38.895Z","v":0}
and this is log from Internal Verdaccio, where the command was run from:
http --> 200, req: 'GET https://our-NPM-AWS-server.com/verdaccio-bitbucket' (streaming)
http --> 200, req: 'GET https://our-NPM-AWS-server.com/verdaccio-bitbucket', bytes: 0/34578
http <-- 200, user: <npm_account>(127.0.0.1), req: 'GET /verdaccio-bitbucket', bytes: 0/5038
http <-- 500, user: <npm_account>(127.0.0.1), req: 'GET /verdaccio-bitbucket/-/verdaccio-bitbucket-1.0.0.tgz', error: bad uplink status code: 403
http <-- 500, user: <npm_account>(127.0.0.1), req: 'GET /verdaccio-bitbucket/-/verdaccio-bitbucket-1.0.0.tgz', error: bad uplink status code: 403
http <-- 500, user: <npm_account>(127.0.0.1), req: 'GET /verdaccio-bitbucket/-/verdaccio-bitbucket-1.0.0.tgz', error: bad uplink status code: 403
Your configuration is correct but slightly wrong. Let me fix it.
uplinks:
  aws:
    url: https://our-NPM-AWS-server.com/
    #strictUrlMatch: false
    headers:
      authorization: "Bearer <token_which_I_harvested_from_/.npmrc_file>"
Do not use Basic, it is Bearer; verdaccio uses JWT. Unfortunately, verdaccio middleware does not accept bearer in lowercase (it does since verdaccio@v3.0.0-beta.7).
For clarification about Basic and JWT: since version verdaccio@2.3.0 all tokens are generated with the JWT library. Somehow, for legacy/unit testing reasons we still accept Basic authentication headers, but all new tokens generated since verdaccio@2.3.0 must use Bearer in headers instead of Basic.
There are a couple of issues I will report on Github, minor ones, but still, they cause issues like this one.
I hope it helps.

Web deploy issue with sitecore media files

I am using File system for media and trying to get Web Deploy to work between CM and CD but running into an issue when I try to publish a media item.
My WebDeploy.config looks like this:
<targetDatabase>web</targetDatabase>
<targetServer>cd-site</targetServer>
<userName>Administrator</userName>
<password>pwd</password>
<localRoot>C:\inetpub\wwwroot\CM\Website</localRoot>
<remoteRoot>C:\inetpub\wwwroot\CD\Website</remoteRoot>
<items hint="list:AddPath">
<media>App_Data/MediaFiles</media>
</items>
I can't seem to get past this error in sitecore after publish, any ideas?:
5652 02:01:11 INFO Job started: Publish to 'web'
ManagedPoolThread #13 02:01:11 INFO MSDEPLOY: Performing synchronization for App_Data/MediaFiles
ManagedPoolThread #13 02:01:11 ERROR MSDEPLOY: Failed to synchronize folder App_Data/MediaFiles. Please verify that the folder exists and is accessible.
Exception: Microsoft.Web.Deployment.DeploymentAgentUnavailableException: Remote agent (URL http://cd-site/MSDEPLOYAGENTSERVICE) could not be contacted. Make sure the remote agent service is installed and started on the target computer. ---> Microsoft.Web.Deployment.DeploymentException: An unsupported response was received. The response header 'MSDeploy.Response' was '' but 'v1' was expected. ---> System.Net.WebException: The remote server returned an error: (401) Unauthorized.
at System.Net.HttpWebRequest.GetResponse()
at Microsoft.Web.Deployment.AgentClientProvider.GetHttpResponse(HttpWebRequest request)
--- End of inner exception stack trace ---
--- End of inner exception stack trace ---
at Microsoft.Web.Deployment.AgentClientProvider.GetHttpResponse(HttpWebRequest request)
at Microsoft.Web.Deployment.AgentClientProvider.CreateStatusThread(DeploymentSyncContext syncContext)
at Microsoft.Web.Deployment.AgentClientProvider.RemoteDestSync(DeploymentObject sourceObject, DeploymentSyncContext syncContext)
at Microsoft.Web.Deployment.DeploymentObject.SyncToInternal(DeploymentObject destObject, DeploymentSyncOptions syncOptions, PayloadTable payloadTable, ContentRootTable contentRootTable)
at Microsoft.Web.Deployment.DeploymentObject.SyncTo(DeploymentProviderOptions providerOptions, DeploymentBaseOptions baseOptions, DeploymentSyncOptions syncOptions)
at Sitecore.Publishing.WebDeploy.DeploymentTaskRunner.Execute()
I fixed this by installing Web Deploy from WebPI. For some reason when I only installed Web Deploy 3.6 independently the services were not registered properly. After installing via WebPI the publishing job instantly started working.