I am trying to connect to AWS DocumentDB with Node.js/TypeScript and Mongoose. I have an EC2 instance set up as an SSL tunnel, which works great; I can connect to DocumentDB locally with Studio 3T and the mongo CLI.
This command works:
mongo --sslAllowInvalidHostnames --ssl --sslCAFile rds-combined-ca-bundle.pem --username <username> --password <password>
But if I try to connect to the same database with Mongoose, it fails. This is my code and the error:
import { connect } from "mongoose";

const options = {
  dbName: "prodDB",
  user: connectionData.username,
  pass: connectionData.password,
  tls: true,
  tlsCAFile: "../rds-combined-ca-bundle.pem",
  tlsAllowInvalidHostNames: true,
};

try {
  await connect("mongodb://localhost:27017", options);
} catch (error) {
  console.log(error);
}
MongooseServerSelectionError: connect EHOSTUNREACH imagine-ip-address-here:27017
  reason: TopologyDescription {
    type: 'ReplicaSetNoPrimary',
    servers: Map(1) {
      'censored:27017' => [ServerDescription]
    },
    stale: false,
    compatible: true,
    heartbeatFrequencyMS: 10000,
    localThresholdMS: 15,
    setName: 'rs0',
    commonWireVersion: 7,
    logicalSessionTimeoutMinutes: undefined
  }
At this point, I have tried pretty much every possible config in Mongoose and I am getting desperate. Any help is appreciated.
This seems to be an issue with Mongoose versions >= 6.
Downgrading Mongoose to version 5.13.8 (e.g. npm install mongoose@5.13.8) works without a problem.
The Mongoose devs are apparently aware of the issue: https://github.com/Automattic/mongoose/issues/11105
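For reference, a minimal sketch of the working setup after the downgrade, reusing the options from the question (it assumes the SSL tunnel is still listening on localhost:27017, and the credentials are passed in rather than read from connectionData):

// package.json pins the older major version: "mongoose": "5.13.8"
import { connect } from "mongoose";

async function connectToDocumentDb(username: string, password: string) {
  // Same options as in the question; with Mongoose 5.x the connection
  // through the local tunnel succeeds instead of failing with
  // ReplicaSetNoPrimary / EHOSTUNREACH.
  await connect("mongodb://localhost:27017", {
    dbName: "prodDB",
    user: username,
    pass: password,
    tls: true,
    tlsCAFile: "../rds-combined-ca-bundle.pem",
    tlsAllowInvalidHostNames: true,
  });
}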
Related
I am trying to create an AWS Glue Streaming job that reads from Kafka (MSK) clusters using SASL/SCRAM client authentication for the connection, per
https://aws.amazon.com/about-aws/whats-new/2022/05/aws-glue-supports-sasl-authentication-apache-kafka/
The connection configuration has the following properties (plus adequate subnet and security groups):
"ConnectionProperties": {
"KAFKA_SASL_SCRAM_PASSWORD": "apassword",
"KAFKA_BOOTSTRAP_SERVERS": "theserver:9096",
"KAFKA_SASL_MECHANISM": "SCRAM-SHA-512",
"KAFKA_SASL_SCRAM_USERNAME": "auser",
"KAFKA_SSL_ENABLED": "false"
}
And the actual API method call is:
df = glue_context.create_data_frame.from_options(
    connection_type="kafka",
    connection_options={
        "connectionName": "kafka-glue-connector",
        "security.protocol": "SASL_SSL",
        "classification": "json",
        "startingOffsets": "latest",
        "topicName": "atopic",
        "inferSchema": "true",
        "typeOfData": "kafka",
        "numRetries": 1,
    }
)
When the job runs, the logs show the client attempting to connect to the brokers using Kerberos (GSSAPI), and it runs into:
22/10/19 18:45:54 INFO ConsumerConfig: ConsumerConfig values:
sasl.mechanism = GSSAPI
security.protocol = SASL_SSL
security.providers = null
send.buffer.bytes = 131072
...
org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
Caused by: org.apache.kafka.common.KafkaException: Principal could not be determined from Subject, this may be a transient failure due to Kerberos re-login
How can I authenticate the AWS Glue job using SASL/SCRAM? What properties do I need to set in the connection and in the method call?
Thank you
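As a side note, it can help to rule out the SCRAM credentials and the SASL_SSL listener on port 9096 independently of Glue before debugging the Glue-specific wiring. A minimal sketch with kafkajs (an assumption: any standalone Kafka client run from a machine with network access to the MSK brokers would do, kafkajs is only used for illustration; the broker, username, and password are the placeholders from the question):

import { Kafka, logLevel } from "kafkajs";

const kafka = new Kafka({
  clientId: "scram-connectivity-check",
  brokers: ["theserver:9096"],
  ssl: true, // MSK SASL/SCRAM listeners require TLS
  sasl: {
    mechanism: "scram-sha-512",
    username: "auser",
    password: "apassword",
  },
  logLevel: logLevel.INFO,
});

async function main() {
  const admin = kafka.admin();
  await admin.connect(); // fails fast if the SCRAM credentials are rejected
  console.log(await admin.listTopics());
  await admin.disconnect();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});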
I'm trying to push documents from my local machine to an Elasticsearch server in AWS. When I try to do so, I get a 403 error and Logstash keeps trying to establish a connection with the server, like so:
[2021-05-09T11:09:52,707][TRACE][logstash.inputs.file ][main] Registering file input {:path=>["~/home/ubuntu/json_try/json_try.json"]}
[2021-05-09T11:09:52,737][DEBUG][logstash.javapipeline ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<Thread:0x5033269f run>"}
[2021-05-09T11:09:53,441][DEBUG][logstash.outputs.amazonelasticsearch][main] Waiting for connectivity to Elasticsearch cluster. Retrying in 4s
[2021-05-09T11:09:56,403][INFO ][logstash.outputs.amazonelasticsearch][main] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://my-dom.co:8001/scans, :path=>"/"}
[2021-05-09T11:09:56,461][WARN ][logstash.outputs.amazonelasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://my-dom.co:8001/scans", :error_type=>LogStash::Outputs::AmazonElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '403' contacting Elasticsearch at URL 'https://my-dom.co:8001/scans/'"}
[2021-05-09T11:09:56,849][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-05-09T11:09:56,853][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-05-09T11:09:57,444][DEBUG][logstash.outputs.amazonelasticsearch][main] Waiting for connectivity to Elasticsearch cluster. Retrying in 8s
.
.
.
I'm using the following logstash conf file:
input {
  file {
    type => "json"
    path => "~/home/ubuntu/json_try/json_try.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

output {
  amazon_es {
    hosts => ["https://my-dom.co/scans"]
    port => 8001
    ssl => true
    region => "us-east-1b"
    index => "snapshot-%{+YYYY.MM.dd}"
  }
}
Also, I've exported the AWS keys for the SSL connection to work. Is there anything I'm missing in order for the connection to succeed?
I've been able to solve this by using elasticsearch as my output plugin instead of amazon_es.
This approach requires the cloud_id of the target node, its cloud_auth, and the target index in Elasticsearch for the data to be sent to. So the conf file will look something like this:
input {
  file {
    type => "json"
    path => "~/home/ubuntu/json_try/json_try.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

output {
  elasticsearch {
    cloud_id => "node_name:node_hash"
    cloud_auth => "auth_hash"
    index => "snapshot-%{+YYYY.MM.dd}"
  }
}
I am getting started with application development on Matic, and I am following the instructions provided in the docs: https://docs.matic.network/docs/develop/getting-started
But I faced a problem while using Truffle. After I run the command
truffle migrate --network matic
I get the following error:
Compiling your contracts...
===========================
> Everything is up to date, there is nothing to compile.
Starting migrations...
======================
> Network name: 'matic'
> Network id: 80001
> Block gas limit: 20000000 (0x1312d00)
1_initial_migration.js
======================
Deploying 'Migrations'
----------------------
Error: *** Deployment Failed ***
"Migrations" -- insufficient funds for gas * price + value.
at /usr/local/lib/node_modules/truffle/build/webpack:/packages/deployer/src/deployment.js:365:1
at process._tickCallback (internal/process/next_tick.js:68:7)
Truffle v5.1.55 (core: 5.1.55)
Node v10.19.0
The Truffle configuration file is as follows:
const HDWalletProvider = require('truffle-hdwallet-provider');
const fs = require('fs');
const mnemonic = fs.readFileSync(".secret").toString().trim();

module.exports = {
  networks: {
    development: {
      host: "127.0.0.1",     // Localhost (default: none)
      port: 8545,            // Standard Ethereum port (default: none)
      network_id: "*",       // Any network (default: none)
    },
    matic: {
      provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.matic.today`),
      network_id: 80001,
      confirmations: 2,
      timeoutBlocks: 200,
      skipDryRun: true
    },
  },

  // Set default mocha options here, use special reporters etc.
  mocha: {
    // timeout: 100000
  },

  // Configure your compilers
  compilers: {
    solc: {
    }
  }
}
It worked fine on the development network using
truffle develop
Can someone tell me how to overcome this error while using the Matic test network?
Make sure you have enough MATIC for the transaction; you can get some from the Mumbai faucet and transfer it to accounts[0]. I had issues deploying with Truffle because all my tokens were on accounts[1] rather than accounts[0].
In my case it was ETH; ensuring I had tokens in the first account did the trick.
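If the test MATIC ended up on accounts[1], a minimal sketch of moving it over from inside truffle console --network matic (the amount is illustrative, and the HDWalletProvider mnemonic is assumed to unlock both accounts):

// Run inside: truffle console --network matic
const accounts = await web3.eth.getAccounts();

// truffle migrate deploys from accounts[0] by default, so fund that account.
await web3.eth.sendTransaction({
  from: accounts[1],
  to: accounts[0],
  value: web3.utils.toWei("0.5", "ether"), // 0.5 MATIC, expressed in wei
});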
I used the Python code below to create a port-forwarding session, but the session seems to get terminated within a few minutes. Can anyone tell me if I am missing something here?
My goal is to bind a remote port (80) to a local port (8888).
self._session_parameters = {
    "Target": self._instance_id,
    "DocumentName": "AWS-StartPortForwardingSession",
    "Parameters": {
        "portNumber": [str(self._remote_port)],
        # Note: given the stated goal (remote 80 -> local 8888), this should
        # presumably be the local port rather than the remote port.
        "localPortNumber": [str(self._remote_port)],
    },
}

self._response = self._ssm.start_session(
    Target=self._session_parameters["Target"],
    DocumentName=self._session_parameters["DocumentName"],
    Parameters=self._session_parameters["Parameters"])

logger.info("Received response: %s", self._response["SessionId"])

self._session_id, self._token_value = (
    self._response["SessionId"],
    self._response["TokenValue"])
I am trying to run mongodump on a DigitalOcean Ubuntu server, and I am getting the following error:
root@Project:~# sudo mongodump --db <database> --out /root/backup/Localdb
2018-11-07T06:28:09.177+0000 Failed: error getting collections for database `<database>`: error running `listCollections`. Database: `<database>` Err: not authorized on `<database>` to execute command { listCollections: 1, cursor: {}, $readPreference: { mode: "secondaryPreferred" }, $db: `<database>` }
How can I fix this issue?
Thanks.