Message is sent with "send_message" in ejabberd using Postman but not received by client

I have installed and configured ejabberd on Ubuntu 22.04, successfully created one user with administrator rights, and created some additional users.
I am using:
OS: Ubuntu 22.04 LTS
ejabberd: 22.10
I have also exposed the ejabberd API via mod_http_api and tested the APIs with Postman; almost every command in the API reference works fine, except send_message.
Here is my ejabberd.yml configuration:
hosts:
  - B660M-D2H-DDR4
  - localhost
  - XX.XXX.37.XX
loglevel: info
ca_file: /opt/ejabberd/conf/cacert.pem
certfiles:
  - /opt/ejabberd/conf/server.pem
## If you already have certificates, list them here
# certfiles:
#   - /etc/letsencrypt/live/domain.tld/fullchain.pem
#   - /etc/letsencrypt/live/domain.tld/privkey.pem
listen:
  -
    port: 5222
    ip: "::"
    module: ejabberd_c2s
    max_stanza_size: 262144
    shaper: c2s_shaper
    access: c2s
    starttls_required: true
  -
    port: 5223
    ip: "::"
    tls: true
    module: ejabberd_c2s
    max_stanza_size: 262144
    shaper: c2s_shaper
    access: c2s
    starttls_required: true
  -
    port: 5269
    ip: "::"
    module: ejabberd_s2s_in
    max_stanza_size: 524288
  -
    port: 5443
    ip: "::"
    module: ejabberd_http
    tls: true
    request_handlers:
      /admin: ejabberd_web_admin
      /api: mod_http_api
      /bosh: mod_bosh
      /captcha: ejabberd_captcha
      /upload: mod_http_upload
      /ws: ejabberd_http_ws
  -
    port: 5280
    ip: "::"
    module: ejabberd_http
    request_handlers:
      /admin: ejabberd_web_admin
      /.well-known/acme-challenge: ejabberd_acme
      /api: mod_http_api
  -
    port: 3478
    ip: "::"
    transport: udp
    module: ejabberd_stun
    use_turn: true
    ## The server's public IPv4 address:
    # turn_ipv4_address: "203.0.113.3"
    ## The server's public IPv6 address:
    # turn_ipv6_address: "2001:db8::3"
  -
    port: 1883
    ip: "::"
    module: mod_mqtt
    backlog: 1000
s2s_use_starttls: optional
acl:
  admin:
    user: "admin@localhost"
  local:
    user_regexp: ""
  loopback:
    ip:
      - 127.0.0.0/8
      - ::1/128
access_rules:
  local:
    - allow: local
    - allow: XX.XXX.37.XX
  c2s:
    deny: blocked
    allow: all
  announce:
    allow: admin
  configure:
    allow: admin
  muc_create:
    allow: local
  pubsub_createnode:
    allow: local
  trusted_network:
    allow: loopback
api_permissions:
  "console commands":
    from:
      - ejabberd_ctl
    who: all
    what: "*"
  "admin access":
    who:
      access:
        allow:
          - acl: admin
      oauth:
        scope: "ejabberd:admin"
        access:
          allow:
            - acl: admin
    what:
      - "*"
      - "!stop"
      - "!start"
  "public commands":
    who:
      ip: 127.0.0.1/8
    what:
      - status
      - connected_users_number
shaper:
  normal:
    rate: 3000
    burst_size: 20000
  fast: 100000
shaper_rules:
  max_user_sessions: 10
  max_user_offline_messages:
    5000: admin
    100: all
  c2s_shaper:
    none: admin
    normal: all
  s2s_shaper: fast
modules:
  mod_adhoc: {}
  mod_admin_extra: {}
  mod_announce:
    access: announce
  mod_avatar: {}
  mod_blocking: {}
  mod_bosh: {}
  mod_caps: {}
  mod_carboncopy: {}
  mod_client_state: {}
  mod_configure: {}
  mod_disco: {}
  mod_fail2ban: {}
  mod_http_api: {}
  mod_http_upload:
    put_url: https://@HOST@:5443/upload
    custom_headers:
      "Access-Control-Allow-Origin": "https://@HOST@"
      "Access-Control-Allow-Methods": "GET,HEAD,PUT,OPTIONS"
      "Access-Control-Allow-Headers": "Content-Type"
  mod_last: {}
  mod_mam:
    ## Mnesia is limited to 2GB, better to use an SQL backend
    ## For small servers SQLite is a good fit and is very easy
    ## to configure. Uncomment this when you have SQL configured:
    ## db_type: sql
    assume_mam_usage: true
    default: always
  mod_mqtt: {}
  mod_muc:
    access:
      - allow
    access_admin:
      - allow: admin
    access_create: muc_create
    access_persistent: muc_create
    access_mam:
      - allow
    default_room_options:
      mam: true
  mod_muc_admin: {}
  mod_offline:
    access_max_user_messages: max_user_offline_messages
  mod_ping: {}
  mod_privacy: {}
  mod_private: {}
  mod_proxy65:
    access: local
    max_connections: 5
  mod_pubsub:
    access_createnode: pubsub_createnode
    plugins:
      - flat
      - pep
    force_node_config:
      ## Avoid buggy clients to make their bookmarks public
      storage:bookmarks:
        access_model: whitelist
  mod_push: {}
  mod_push_keepalive: {}
  mod_register:
    ## Only accept registration requests from the "trusted"
    ## network (see access_rules section above).
    ## Think twice before enabling registration from any
    ## address. See the Jabber SPAM Manifesto for details:
    ## https://github.com/ge0rg/jabber-spam-fighting-manifesto
    ip_access: trusted_network
  mod_roster:
    versioning: true
  mod_s2s_dialback: {}
  mod_shared_roster: {}
  mod_stream_mgmt:
    resend_on_timeout: if_offline
  mod_stun_disco: {}
  mod_vcard: {}
  mod_vcard_xupdate: {}
  mod_version:
    show_os: false
I have two observations with messages (send_message):
1. client to client (PSI)
2. Postman to client
In the first case I successfully exchange messages between users in PSI, but when I try to send a message to the client with Postman using the mod_http_api API, I get a 200 OK result, yet the message is not delivered, and it does not show up anywhere in the logs.
Am I missing something that is important for receiving a message sent through ejabberd's REST API with Postman?

What a strange problem; I cannot reproduce it. You didn't show your command query, and you didn't mention what exact client and configuration you are using.
Summary: check whether the command works correctly when using the ejabberdctl command-line tool, use the "normal" message type, send to a bare JID, and try another client, for example Gajim (just for debugging the problem).
Details:
I installed ejabberd 22.10 from source code, copied your configuration, disabled the cert and TLS options, started ejabberd, registered accounts, logged in, and executed this command:
$ ejabberdctl send_message headline uuu@localhost user1@localhost Restart aaa
The client that was logged in as user1@localhost received the stanza and displayed the headline message:
<message to='user1@localhost'
         from='uuu@localhost'
         type='headline'
         id='18154938236359942834'>
  <body>aaa</body>
  <subject>Restart</subject>
</message>
Please note: in XMPP, "headline" messages are not stored in offline storage; they are only received by online sessions with positive priority. Maybe you are sending "headline" messages to sessions that are offline, online with negative priority, or online with no initial presence?
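A quick way to check whether the target session is online and with which priority is the user_sessions_info command from mod_admin_extra (which your config already enables); the user name and host here are assumed:
$ ejabberdctl user_sessions_info user1 localhost
The output lists each session's resource, status, and priority; no session at all, or a negative priority, would explain why a "headline" message is silently dropped.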
It's preferable to send a "normal" message, which is stored offline:
ejabberdctl send_message normal uuu@localhost user1@localhost ThisisNormal bbb
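For comparison, the same command can be issued through mod_http_api; here is a minimal curl equivalent of a Postman request, assuming the plain-HTTP listener on port 5280 and an admin account admin@localhost with an assumed password:
curl -X POST http://localhost:5280/api/send_message \
  -u admin@localhost:adminpassword \
  -H "Content-Type: application/json" \
  -d '{"type":"normal","from":"admin@localhost","to":"user1@localhost","subject":"Test","body":"bbb"}'
A response of 0 only means the command was accepted; it does not prove delivery, so still check the recipient's sessions as described above.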
Also, make sure your client is logged in with positive priority (this is the standard).

Related

ESP32 MQTT with TLS not working with NATS MQTT

Hi, I have a project connecting an ESP32 to NATS via MQTT.
After I try --insecurity it works normally.
But when I add TLS it doesn't work on my ESP32; I also tried with Python, and it works normally with my self-signed SSL certificate.
I already found a solution here, but it doesn't work: https://github.com/espressif/arduino-esp32/issues/5021
My code is based on this example: https://github.com/debsahu/ESP-MQTT-AWS-IoT-Core/blob/master/Arduino/PubSubClient/PubSubClient.ino
Does MQTT TLS on the ESP32 not work with a self-signed cert, or have I done something wrong?
Cert TLS:
"-----BEGIN CERTIFICATE-----\n"
"MIID8TCCAtmgAwIBAgIUfceZXKK1JIqHi57rc98EBmJoy1kwDQYJKoZIhvcNAQEL\n"
"BQAwgYcxCzAJBgNVBAYTAlZOMRAwDgYDVQQIDAd2aWV0bmFtMRAwDgYDVQQHDAd2\n"
"aWV0bmFtMQ4wDAYDVQQKDAVwZWNvbTENMAsGA1UECwwEdGVzdDENMAsGA1UEAwwE\n"
"bXF0dDEmMCQGCSqGSIb3DQEJARYXY3B0cHJpY2UxMjNAb3V0bG9vay5jb20wHhcN\n"
"MjIxMTAzMDgxMDEzWhcNMjMxMTAzMDgxMDEzWjCBhzELMAkGA1UEBhMCVk4xEDAO\n"
"BgNVBAgMB3ZpZXRuYW0xEDAOBgNVBAcMB3ZpZXRuYW0xDjAMBgNVBAoMBXBlY29t\n"
"MQ0wCwYDVQQLDAR0ZXN0MQ0wCwYDVQQDDARtcXR0MSYwJAYJKoZIhvcNAQkBFhdj\n"
"cHRwcmljZTEyM0BvdXRsb29rLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC\n"
"AQoCggEBALRTuKn8m1QuFJI3THb2rkEiKPHD/cdRs/E1Vb96GIBSy4D/s8vJ2OWd\n"
"GHlbLK557OpAH7JrRg6tVEVVr3293u8imwDIcNyOHlBYWSO/DBKGXsoCbOL1u6Gd\n"
"zAn/G+96eX3RUIHRbBF/rE6DZS5Y1Piq7FwdaReHSZhMPB+UMB4xUEC3pC6CzqFt\n"
"xjudk9zT5VpR60XiJAls3YtYpUu4zRZUw2Sb1ZsPmT555QFYbOcF4XlC82MVi/o3\n"
"M91LJ8DyiOvNWxuioIT2frEyIXaTleug3Ev0ALiu8ug9/v/zTWZWq3KA98HZJcm+\n"
"Hr8dChlMewpMpabEi1e0twlzTPw9QyMCAwEAAaNTMFEwHQYDVR0OBBYEFE3SQ0F5\n"
"yzsBkHUcFp/KucgyGHpWMB8GA1UdIwQYMBaAFE3SQ0F5yzsBkHUcFp/KucgyGHpW\n"
"MA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAC+pjnAd9c71DfCv\n"
"RUMvYg93mraaqtoEw22ggtT9AfIZfI+o8L5Nxk5us+9k9IBEP4hi6DHtnFxqfFt8\n"
"YkzNNDMTDvLfg/1INUwg8yBYS9Z2+puoPlPTvaxOJiuz2+DkYV/LnUdTolKPqYrO\n"
"IBIbmwMNz0Bqn1XZ7Mjx9U7p+A2N/87NGl3fo0I0tWBRSGXFZB7IYipgCPQG5Eb+\n"
"ZL9vrgFuNJhAVALvDfwKxCX6VdyNpthAMA7cmra+s0/weZsfQLyU2TtnsIg0uoq0\n"
"L4sTpL6Q7Fr4UpOJrezNx/kuHHxBBKhJtlz4Tcaw/OKu/h2g5jjHFT9yN86KUxSY\n"
"PDH47kk=\n"
"-----END CERTIFICATE-----\n";
// You can use x.509 client certificates if you want
const char *test_client_key =
"-----BEGIN RSA PRIVATE KEY-----\n"
"MIIEogIBAAKCAQEAllaf/na5h3yDr2zoYsYGGqt/d93/AAUTculGTXdGGbRwyHue\n"
"b0BaMeX+ht9siZ82iuaZ/5mJ/kq8WVwlrkegOAvU7SQDoALPM7VLMLSMbnn2Wqog\n"
"WE48TkWU0WddtTFHVDGLX8zMC1TQ1VKyVzp2QtCW9RPJNun9CVJSoZ34uM5hBL1f\n"
"7MY7t/QsDYi14UtULDsSnVz+tDLiPrBkZOPEVhopCH1gvljcDTcICfawyK5nlCKc\n"
"AnUWTHEUzf89WCJkPNk1W3LhscGKfx2bV8XVv+izg2zMLec5aYM/LrJg6HpJzgQJ\n"
"IKBt1tWQkxRvO7LO3znSp8A9DXotvr0MkIqcjQIDAQABAoIBAEMAaF3oW9deTvIn\n"
"/4nF54KLXEv3zGYd3QUhogt0VPGv0XQIZBwA+jGy5zUE7kKHiq9tBsU7kJycgkTx\n"
"JHn/whA4dbUaj+MIXYAWFGSoks3J3Vma6L9yXr4jlKefAcx3IesMCamwhF+odUod\n"
"iQ4HKB2vCRhAsTSgI/27isgst2TlJsGMf7ED2N1jae8ZyOITi2g0F1edRYBwgSHq\n"
"MZvccZh/IpuTOPEVxuITYyQT9WF0TCz7cK4wCP5dACQQB6Or8l2xiUf9dx3I7kwR\n"
"7wvivI+jAoxR/peOXx2o0bHPcqh41rbhbE00XOcIReGoyLsRDvicw3hgFe6UxcEm\n"
"PlpFzaECgYEAyBPpzK3x0iXj66iO6erXzciN5cXF8IZhC7xcCgGOpnjgrMV3FUNv\n"
"L0Qu8zUlTJHfWpITCZawPpbNMaNShykLU6NqxUPXGtaH/xVUZm9VbkRwBQoQKg+w\n"
"x2+hAWTGu4rWtSaWMHJuwI0SYyopvJtBgDO8PkmzDG24RQuRVBSE+ycCgYEAwFu6\n"
"QHVHvVm4ri1FCIK313uXTWoYhKDCm8ygDKT608bHzBoqOcXPT5mcr3IZmZitsg3Y\n"
"DyVvPGmmbLp8FmxXcz2c71e1Bupeq9V8HrMiSgMVPEIRuNKVC7WE/Ymuvpvfd+h/\n"
"RyDCu2wTI4GcJRhmAB+SpjPPOH0qaqV2eHZgSysCgYAO5eyy4QDwtQGTuqlpoaMQ\n"
"H67xPRjQIDF5vjzcQeFtY/LW6p1DaBIPYvRcB8kPOo13IQlp3V6iSnhdCdxLVDMT\n"
"t0dsCPErfm4CAISYXBHwdAgjV+x8NU7kittiTy69KEl0k7r7QIoerGKCH9GbybPG\n"
"6BNMUBCVDFZ8TbA0opKEYQKBgEl0/fxNjTbXA3qoWPt2B8SnMtFiWbiUN50NmHUb\n"
"r5meCIB94XAshQ2NyNMLDJGmR3Z+aOrnzcHRSresw2RAvWiJt9uCr+PTLpIKNZr3\n"
"p3mCEeLwDBp7eGV/TSkRIgUyOzVsOOatsQ+nputhPILB/XnAlN0ZXeHhkoglZcd8\n"
"1Sr/AoGAU7nlyAMQNd/tckwPTnM++0ewrFvwrfpS7f2dhcYbIhfqQ3I03Gwzjkkg\n"
"G85uzTg/8iO4oxPRjqPvc7JaoDDmGY/efQvjR+FdwDOuy+XZPImZIgGjl0yvAMFU\n"
"6azU+OxtwV+Yyfad4rGxaXZsqOIs18to94t2kjI0t8ur/4Q7C5w=\n"
"-----END RSA PRIVATE KEY-----\n"; // to verify the client
const char *test_client_cert =
"-----BEGIN CERTIFICATE-----\n"
"MIIDnjCCAoYCFBRQlTP3aMzr8YtFlYoaVtrPIN6xMA0GCSqGSIb3DQEBCwUAMIGH\n"
"MQswCQYDVQQGEwJWTjEQMA4GA1UECAwHdmlldG5hbTEQMA4GA1UEBwwHdmlldG5h\n"
"bTEOMAwGA1UECgwFcGVjb20xDTALBgNVBAsMBHRlc3QxDTALBgNVBAMMBG1xdHQx\n"
"JjAkBgkqhkiG9w0BCQEWF2NwdHByaWNlMTIzQG91dGxvb2suY29tMB4XDTIyMTEw\n"
"MzA4MTMzMVoXDTIzMDIxMTA4MTMzMVowgY4xCzAJBgNVBAYTAlZOMRAwDgYDVQQI\n"
"DAd2aWV0bmFtMRAwDgYDVQQHDAd2aWV0bmFtMQ4wDAYDVQQKDAVwZWNvbTENMAsG\n"
"A1UECwwEdGVzdDETMBEGA1UEAwwKbXF0dGNsaWVudDEnMCUGCSqGSIb3DQEJARYY\n"
"Y3B0cHJpY2VAMTIzQG91dGxvb2suY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A\n"
"MIIBCgKCAQEAllaf/na5h3yDr2zoYsYGGqt/d93/AAUTculGTXdGGbRwyHueb0Ba\n"
"MeX+ht9siZ82iuaZ/5mJ/kq8WVwlrkegOAvU7SQDoALPM7VLMLSMbnn2WqogWE48\n"
"TkWU0WddtTFHVDGLX8zMC1TQ1VKyVzp2QtCW9RPJNun9CVJSoZ34uM5hBL1f7MY7\n"
"t/QsDYi14UtULDsSnVz+tDLiPrBkZOPEVhopCH1gvljcDTcICfawyK5nlCKcAnUW\n"
"THEUzf89WCJkPNk1W3LhscGKfx2bV8XVv+izg2zMLec5aYM/LrJg6HpJzgQJIKBt\n"
"1tWQkxRvO7LO3znSp8A9DXotvr0MkIqcjQIDAQABMA0GCSqGSIb3DQEBCwUAA4IB\n"
"AQCF33dWLyL/QJKDBNtKc6WwmOn97u74jkIYdgRHgQwNvrmLHRgZPb6Bhzy5KAIY\n"
"qJcPA6Cn/m4utUWjAXRPj9zDT5xyeC843R22KQASjmPBnEyfDZuXmUPjjNJUSUx6\n"
"JGk/bwPQDLT2ID+vl3OInm4ypgwbGaqlhn41m0F2smanuZUFgEmN5+tJpkwK/tVP\n"
"IYHJ5HPnFqDFs84Fp12HU2QcqbOUEZ/d77Yw/dfb20cvgW2xkHKEAhz7d9EpD4ov\n"
"S5ZnelKxvqlVzI2v2I6MJkRdeP2IfYofNfRo2s7S5u+h/2SQu1MbbarS/jd32Ldz\n"
"14EVvDj+sCF2g7skdJ3kYCPI\n"
"-----END CERTIFICATE-----\n"; // to verify the client
Server config:
listen: 0.0.0.0:4222
jetstream: {
    max_memory_store: 1073741824
    max_file_store: 1073741824
}
mqtt {
    # Specify a host and port to listen for websocket connections
    #
    listen: "0.0.0.0:8883"
    # It can also be configured with individual parameters,
    # namely host and port.
    #
    # host: "hostname"
    # port: 1883

    # TLS configuration.
    tls {
        cert_file: /etc/tls/mqtt/broker/broker.crt
        key_file: /etc/tls/mqtt/broker/broker.key
        ca_file: /etc/tls/mqtt/ca/ca.crt
        verify: true
        timeout: 2.0
        # verify_and_map: true
    }

    # no_auth_user: "my_username_for_apps_not_providing_credentials"

    # authorization {
    #     # username: "my_user_name"
    #     # password: "my_password"
    #     # token: "my_token"
    #     # timeout: 2.0
    # }

    ack_wait: "1m"
    max_ack_pending: 100
}
tls: {
    cert_file: /etc/tls/natsio/server-cert.pem
    key_file: /etc/tls/natsio/server-key.pem
    ca_file: /etc/tls/natsio/ca-cert.pem
}
http_port: 8222
# system_account: AAOQAS43OSVDMF3ERYSNL3GMGZRD7GILDGDET6R52NFZKEWJOTTVNYZ4
# resolver: {
#     type: full
#     dir: './jwt'
#     allow_delete: false
#     interval: "2m"
#     limit: 1000
# }
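Before involving the ESP32, it can help to verify the broker's TLS listener and the client certificate from a desktop machine; a sketch assuming the file names above and the broker reachable at 192.168.1.10 (an assumed address):
openssl s_client -connect 192.168.1.10:8883 \
    -CAfile ca.crt -cert client.crt -key client.key
Since verify: true makes the broker require a client certificate, a completed handshake here but not on the ESP32 points at the device side (clock, cert strings, or hostname checking) rather than at the NATS config.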
Logs from NATS server:
test-nats-dev-1 | [1] 2022/11/03 10:29:32.794114 [ERR] 192.168.1.14:57479 - mid:699 - TLS handshake error: remote error: tls: bad certificate
test-nats-dev-1 | [1] 2022/11/03 10:29:37.989099 [ERR] 192.168.1.14:57480 - mid:700 - TLS handshake error: remote error: tls: bad certificate
ESP32 logs:
Attempting MQTT connection...[2959556][E][ssl_client.cpp:37] _handle_error(): [start_ssl_client():276]: (-9984) X509 - Certificate verification failed, e.g. CRL, CA or signature check failed
[2959559][E][WiFiClientSecure.cpp:135] connect(): start_ssl_client: -9984
failed, rc=-2 try again in 5 seconds
Attempting MQTT connection...[2964762][E][ssl_client.cpp:37] _handle_error(): [start_ssl_client():276]: (-9984) X509 - Certificate verification failed, e.g. CRL, CA or signature check failed
[2964765][E][WiFiClientSecure.cpp:135] connect(): start_ssl_client: -9984
failed, rc=-2 try again in 5 seconds
Attempting MQTT connection...[2976298][E][ssl_client.cpp:37] _handle_error(): [start_ssl_client():276]: (-9984) X509 - Certificate verification failed, e.g. CRL, CA or signature check failed
[2976301][E][WiFiClientSecure.cpp:135] connect(): start_ssl_client: -9984
failed, rc=-2 try again in 5 seconds
I found the problem: my cert was issued for a domain name, but I was connecting by IP address, which makes certificate verification fail.
After changing the connection to use the domain, it works normally.
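If connecting by IP is unavoidable, the usual alternative is to reissue the broker certificate with the IP address in a subjectAltName so verification can succeed by IP; a sketch with assumed file names and address:
openssl req -new -key broker.key -out broker.csr -subj "/CN=broker"
openssl x509 -req -in broker.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -extfile <(printf "subjectAltName=IP:192.168.1.10") -out broker.crt -days 365
Otherwise, pointing the client at the certificate's domain name, as described above, is the simpler fix.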
Domain TLS: https://docs.cpanel.net/knowledge-base/general-systems-administration/what-is-domain-tls/
IP TLS: Is it possible to have SSL certificate for IP address, not domain name?

Get status code from a gRPC endpoint without using a gRPC client

How could I get the status code from a gRPC endpoint without having a gRPC client?
I need to test some gRPC endpoints that are behind an AWS application load balancer (target group) with a health check configured to only accept status 12:
UNIMPLEMENTED 12 The operation is not implemented or is not supported/enabled in this service.
I tried grpcurl, for example:
grpcurl -plaintext 10.1.2.8:8443 AWS.ALB/healthcheck
But in many cases I get:
Error invoking method "AWS.ALB/healthcheck": failed to query for service descriptor "AWS.ALB": server does not support the reflection API
Any alternatives or ideas? I am just interested in the status code 12 or the description UNIMPLEMENTED.
For status code 12, you can change the success code expected by the ALB health check through an annotation on the Ingress:
metadata:
  annotations:
    alb.ingress.kubernetes.io/success-codes: '0'  # change success code to 0
spec:
  rules:
    - host: server.xxx.com
      http:
        paths:
          - path: /*  # set to * to match all gRPC service paths
            backend:
              serviceName: server
              servicePort: 9420
Also enable server reflection:
@@ -40,6 +40,7 @@ import (
     "google.golang.org/grpc"
     pb "google.golang.org/grpc/examples/helloworld/helloworld"
+    "google.golang.org/grpc/reflection"
 )
@@ -61,6 +62,8 @@ func main() {
     }
     s := grpc.NewServer()
     pb.RegisterGreeterService(s, &pb.GreeterService{SayHello: sayHello})
+    // Register reflection service on gRPC server.
+    reflection.Register(s)
     if err := s.Serve(lis); err != nil {
         log.Fatalf("failed to serve: %v", err)
     }
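If you cannot modify the server at all, grpcurl can also be pointed at a local copy of the service's .proto file instead of relying on reflection; a sketch assuming you have the definition on disk:
grpcurl -plaintext -import-path ./protos -proto helloworld.proto \
    10.1.2.8:8443 helloworld.Greeter/SayHello
When the server answers an RPC with an error, grpcurl prints the gRPC code (e.g. "Code: Unimplemented"), which corresponds to the numeric status 12 the ALB health check is looking for.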

Connecting cassandra-stress to AWS Keyspaces

I've provisioned a Keyspace on AWS, and in order to make sure it can achieve our desired performance, I'm trying to run the cassandra-stress tool against it and compare it to other architectures we're experimenting with.
I managed to connect to it using the following cqlshrc:
[connection]
port = 9142
factory = cqlshlib.ssl.ssl_transport_factory
[ssl]
validate = true
certfile = /root/.cassandra/AmazonRootCA1.pem
And the following command (hoping that soon enough there will be Python 3 support; the development was completed this February according to their Jira ticket):
cqlsh cassandra.eu-central-1.amazonaws.com 9142 -u "myuser-at-722222222222" -p "12/12ZmHmtD1klsDk9cgqt/XXXXXXXXxUz6Sy687z/U=" --ssl --cqlversion="3.4.4"
Surprisingly or not, when using the official AWS guides things tend to work.
So I went on and tried connecting the cassandra-stress tool (I run it inside a Docker container, as I'd rather keep my OS Java-free) to the same Keyspace.
First I converted the AWS AmazonRootCA1.pem into cassandra_truststore.jks using the following commands (explained here):
openssl x509 -outform der -in AmazonRootCA1.pem -out temp_file.der
keytool -import -alias cassandra -keystore cassandra_truststore.jks -file temp_file.der
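To confirm the CA actually made it into the truststore, it can be listed; a quick check assuming the password chosen above:
keytool -list -v -keystore cassandra_truststore.jks -storepass mypassword | grep -i amazon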
Now when I'm trying to run the actual tool like this:
./cassandra-stress write -node cassandra.eu-central-1.amazonaws.com -port native=9142 thrift=9142 jmx=9142 -transport truststore=/root/.cassandra/cassandra_truststore.jks truststore-password=mypassword -mode native cql3 user="myuser-at-722222222222" password="12/12ZmHmtD1klsDk9cgqt/XXXXXXXXxUz6Sy687z/U="
I'm getting the following error:
******************** Stress Settings ********************
Command:
  Type: write
  Count: -1
  No Warmup: false
  Consistency Level: LOCAL_ONE
  Target Uncertainty: 0.020
  Minimum Uncertainty Measurements: 30
  Maximum Uncertainty Measurements: 200
  Key Size (bytes): 10
  Counter Increment Distibution: add=fixed(1)
Rate:
  Auto: true
  Min Threads: 4
  Max Threads: 1000
Population:
  Sequence: 1..1000000
  Order: ARBITRARY
  Wrap: true
Insert:
  Revisits: Uniform: min=1,max=1000000
  Visits: Fixed: key=1
  Row Population Ratio: Ratio: divisor=1.000000;delegate=Fixed: key=1
  Batch Type: not batching
Columns:
  Max Columns Per Key: 5
  Column Names: [C0, C1, C2, C3, C4]
  Comparator: AsciiType
  Timestamp: null
  Variable Column Count: false
  Slice: false
  Size Distribution: Fixed: key=34
  Count Distribution: Fixed: key=5
Errors:
  Ignore: false
  Tries: 10
Log:
  No Summary: false
  No Settings: false
  File: null
  Interval Millis: 1000
  Level: NORMAL
Mode:
  API: JAVA_DRIVER_NATIVE
  Connection Style: CQL_PREPARED
  CQL Version: CQL3
  Protocol Version: V4
  Username: myuser-at-722222222222
  Password: *suppressed*
  Auth Provide Class: null
  Max Pending Per Connection: 128
  Connections Per Host: 8
  Compression: NONE
Node:
  Nodes: [cassandra.eu-central-1.amazonaws.com]
  Is White List: false
  Datacenter: null
Schema:
  Keyspace: keyspace1
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Replication Strategy Pptions: {replication_factor=1}
  Table Compression: null
  Table Compaction Strategy: null
  Table Compaction Strategy Options: {}
Transport:
  factory=org.apache.cassandra.thrift.TFramedTransportFactory; truststore=/root/.cassandra/cassandra_truststore.jks; truststore-password=mypassword; keystore=null; keystore-password=null; ssl-protocol=TLS; ssl-alg=SunX509; store-type=JKS; ssl-ciphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA;
Port:
  Native Port: 9142
  Thrift Port: 9142
  JMX Port: 9142
Send To Daemon:
  *not set*
Graph:
  File: null
  Revision: unknown
  Title: null
  Operation: WRITE
TokenRange:
  Wrap: false
  Split Factor: 1
java.lang.RuntimeException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cassandra.eu-central-1.amazonaws.com/3.127.48.183:9142 (com.datastax.driver.core.exceptions.TransportException: [cassandra.eu-central-1.amazonaws.com/3.127.48.183] Channel has been closed))
at org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:220)
at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesNative(SettingsSchema.java:79)
at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:69)
at org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:228)
at org.apache.cassandra.stress.StressAction.run(StressAction.java:57)
at org.apache.cassandra.stress.Stress.run(Stress.java:143)
at org.apache.cassandra.stress.Stress.main(Stress.java:62)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cassandra.eu-central-1.amazonaws.com/3.127.48.183:9142 (com.datastax.driver.core.exceptions.TransportException: [cassandra.eu-central-1.amazonaws.com/3.127.48.183] Channel has been closed))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1424)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:403)
at org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:160)
at org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:211)
... 6 more
I've tried changing some parameters, such as the jks password (just in case I was wrong), but I got a different error message, so that's probably not the issue.
Did I miss something?
Try using TLP Stress instead:
tlp-stress run RandomPartitionAccess -d 10m --host cassandra.us-east-1.amazonaws.com --port 9142 --username alice --password fLyWYFlTCD5J2gzGAZ --ssl --max-requests 4000 --dc us-east-2 --threads 10
https://thelastpickle.com/tlp-stress/

How to configure multiple anchor peers

I learned from this site that each member on a channel can have multiple anchor peers to prevent a SPOF.
I would like to try multiple anchor peers in the fabcar demo.
Kindly let me know how to configure multiple anchor peers.
In order to have more than one anchor peer per organization, you need to configure it in configtx.yaml, i.e. add the new anchor peers in the following section:
Organizations:
    - &Org1
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: Org1MSP

        # ID to load the MSP definition as
        ID: Org1MSP

        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp

        AnchorPeers:
            # AnchorPeers defines the location of peers which can be used
            # for cross org gossip communication. Note, this value is only
            # encoded in the genesis block in the Application section context
            - Host: peer0.org1.example.com
              Port: 7051
            - Host: peer1.org1.example.com
              Port: 7051

    - &Org2
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: Org2MSP

        # ID to load the MSP definition as
        ID: Org2MSP

        MSPDir: crypto-config/peerOrganizations/org2.example.com/msp

        AnchorPeers:
            # AnchorPeers defines the location of peers which can be used
            # for cross org gossip communication. Note, this value is only
            # encoded in the genesis block in the Application section context
            - Host: peer0.org2.example.com
              Port: 7051
            - Host: peer1.org2.example.com
              Port: 7051
This will define two anchor peers for each organization. Next you need to use configtxgen to produce the config update transactions that include those anchor peers for both orgs:
configtxgen -profile TwoOrgsChannel -channelID mychannel -outputAnchorPeersUpdate=Org1MSPanchors.tx -asOrg=Org1MSP
configtxgen -profile TwoOrgsChannel -channelID mychannel -outputAnchorPeersUpdate=Org2MSPanchors.tx -asOrg=Org2MSP
To update the channel, run:
# updating anchors for Org1
CORE_PEER_ADDRESS=peer0.org.example.com peer channel update -f Org1MSPanchors.tx -c mychannel -o orderer.example.com:7050
against the endorsing peer of each org respectively.
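For the second organization, the same update is repeated with Org2's admin context; a sketch with assumed MSP ID and admin MSP path:
# updating anchors for Org2
CORE_PEER_LOCALMSPID=Org2MSP \
CORE_PEER_MSPCONFIGPATH=crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp \
CORE_PEER_ADDRESS=peer0.org2.example.com:7051 \
peer channel update -f Org2MSPanchors.tx -c mychannel -o orderer.example.com:7050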

Is there a way to logformat Dropwizard access.log

I have used the following in my configuration file, but I am still getting the standard server log format in access.log. Is there a way to modify it? Dropwizard version: 0.7.
server:
  adminConnectors:
    -
      port: 8889
      type: http
  applicationConnectors:
    -
      acceptorThreads: 7
      port: 8888
      selectorThreads: 14
      type: http
  maxQueuedRequests: 1024
  maxThreads: 1024
  requestLog:
    appenders:
      -
        archive: true
        archivedFileCount: 3
        archivedLogFilenamePattern: /var/log/access-%i.log
        currentLogFilename: /var/log/access.log
        logFormat: '[%d{yyyy-MM-dd HH:mm:ss.SSS}] [%-5level]'
        maxFileSize: 200MB
        threshold: ALL
        timeZone: IST
        type: file-size-rolled
    timeZone: IST
You need to put the section under "server" (like you have), like this:
server:
  requestLog:
    appenders:
      - type: console
        threshold: ALL
        logFormat: '%h [%date{ISO8601}] "%r" %s %b %D [%i{User-Agent}]'
and you need to use a "newer" version (I think from 0.9.something) to get the logFormat setting working.
I think this is because they changed from the Jetty access-log implementation to logback-access (http://logback.qos.ch/access.html) in newer versions.
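A quick way to verify the format after upgrading is to hit any endpoint and watch the request log; a sketch assuming the ports and paths from the question:
curl -s http://localhost:8888/some/resource > /dev/null
tail -n 1 /var/log/access.log
Each request should now be printed using the custom logFormat pattern rather than the default NCSA-style line.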