Redis username setting for Kubernetes deployment - amazon-web-services

I am using a Redis (MemoryDB) cluster for caching. I am trying to set a Redis username inside my Kubernetes deployment; since my MemoryDB version is 6.2, I have set the username inside the deployment. Is there a way to set this? Currently I am getting an error like:
"redis connection tls true
Redis connection established
adapter listening on [::]:9123
adapter listening on [::]:9122
LTE / UDP forward with port : 35115
LTE / UDP forward with port : 44801
MQTT connection established
WRONGPASS invalid username-password pair or user is disabled.
panic: WRONGPASS invalid username-password pair or user is disabled. "
Can you help me solve it?
I have tried modifying the Redis ConfigMap to set the username.

I have the same problem. Is there an environment variable like REDIS_USER or REDIS_USERNAME which allows specifying the user for the login, as in redis-cli -u redis://user:password@host:6379?
Something like this?
env:
  - name: REDIS_USERNAME
    value: "$(REDIS_USERNAME)"
  - name: REDIS_USER
    value: "$(REDIS_USER)"

Related

Unable to connect to the AWS RDS database from ASP.NET Hangfire

I am developing an ASP.NET Core Web API project. In my project, I am using Hangfire to run background tasks, so I am configuring Hangfire to use the database like this:
public void ConfigureServices(IServiceCollection services)
{
    services.AddHangfire(configuration =>
    {
        configuration.UseSqlServerStorage("Server=(LocalDB)\\MSSQLLocalDB;Integrated Security=true;");
    });
    // ...
}
In the above code, I am using LocalDB. Now I am trying to use an AWS RDS database, since I am deploying my application on AWS Elastic Beanstalk. I created a function to build the connection string:
public static string GetRDSConnectionString()
{
    string dbname = "ebdb";
    if (string.IsNullOrEmpty(dbname)) return null;
    string username = "admin";
    string password = "password";
    string hostname = "cxcxcxcx.xcxcxcxc.eu-west-2.rds.amazonaws.com:1234";
    string port = "1234";
    return "Data Source=" + hostname + ";Initial Catalog=" + dbname + ";User ID=" + username + ";Password=" + password + ";";
}
I got the above code from the official AWS documentation. What I am not clear about is the database name: will it always be "ebdb"? I tried to find out the database name but could not; the tutorial says to use ebdb, so I used it.
Then, in the configuration, I changed it to this:
configuration.UseSqlServerStorage(AppConfig.GetRDSConnectionString());
When I run the code, it gives me this error:
Win32Exception: The parameter is incorrect
Unknown location
SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 25 - Connection string is not valid)
System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, object providerInfo, bool redirectedUserInstance, SqlConnectionString userConnectionOptions, SessionData reconnectSessionData, bool applyTransientFaultHandling)
Win32Exception: The parameter is incorrect
Basically, it cannot connect to the database when I run my application, even though I set the correct credentials. The only thing I doubt is the database name (ebdb). What is wrong with my configuration, and how can I fix it?
Calling a few things out here just in case...
You have your port specified both in your host variable and as a separate port variable... but you never use port.
Can you confirm that you are able to access your SQL Server via another means, such as from SQL Server Management Studio?
RDS uses SSL by default now for connections; my .NET is rusty, but would you need to tell the connection string to run over a secure protocol?
And finally, regarding the AWS Security Group on your RDS instance: have you opened up the correct port to your machine/network/IP?
This is the screenshot of the RDS db instance security group section in the console.
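To illustrate the port and SSL points above: SQL Server connection strings use "host,port" rather than "host:port", so a corrected helper might look like the sketch below (Encrypt=True addresses the SSL question; all values remain placeholders):

public static string GetRDSConnectionString()
{
    string dbname = "ebdb";
    string username = "admin";
    string password = "password";
    string hostname = "cxcxcxcx.xcxcxcxc.eu-west-2.rds.amazonaws.com"; // host only, no port
    string port = "1234";

    // SQL Server uses a comma, not a colon, between host and port.
    return "Data Source=" + hostname + "," + port +
           ";Initial Catalog=" + dbname +
           ";User ID=" + username +
           ";Password=" + password +
           ";Encrypt=True;";
}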

How to deploy Kafka on Google Cloud

I deployed Kafka on Google Cloud and changed listeners to
PLAINTEXT://[internal ip address]:9092
And when I try
sudo ./bin/kafka-topics.sh --list --zookeeper [external IP address]:2181
I can see the topic on the broker. However, when I try to produce a message to the Kafka broker
sudo ./bin/kafka-console-producer.sh --broker-list [external IP address]:9092 --topic test
the following error shows up:
ERROR Error when sending message to topic test with key: null, value:
5 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s)
for test-0: 1506 ms has passed since batch creation plus linger time
I wonder which properties I set wrong and how to fix them.
You need to set advertised.listeners to the external IP so that clients can correctly connect to it. Otherwise they'll try to connect to the internal IP (since advertised.listeners will default to listeners unless explicitly set)
Ref: https://kafka.apache.org/documentation/#brokerconfigs
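Concretely, the relevant broker settings in server.properties would look something like this (the IPs are the same placeholders as above):

listeners=PLAINTEXT://[internal ip address]:9092
advertised.listeners=PLAINTEXT://[external IP address]:9092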

Hazelcast cluster over AWS using Docker

Hi, I am trying to configure a Hazelcast cluster over AWS.
I am running Hazelcast in a Docker container and using --net=host to use the host network config.
When I look at the Hazelcast logs, I see:
[172.17.0.1]:5701 [herald] [3.8] Established socket connection between /[node2]:5701 and /[node1]:47357
04:24:22.595 [hz._hzInstance_1_herald.IO.thread-out-0] DEBUG c.h.n.t.SocketWriterInitializerImpl - [172.17.0.1]:5701 [herald] [3.8] Initializing SocketWriter WriteHandler with Cluster Protocol
04:24:22.595 [hz._hzInstance_1_herald.IO.thread-in-0] WARN c.h.nio.tcp.TcpIpConnectionManager - [172.17.0.1]:5701 [herald] [3.8] Wrong bind request from [172.17.0.1]:5701! This node is not requested endpoint: [node2]:5701
04:24:22.595 [hz._hzInstance_1_herald.IO.thread-in-0] INFO c.hazelcast.nio.tcp.TcpIpConnection - [172.17.0.1]:5701 [herald] [3.8] Connection[id=40, /[node2]:5701->/[node1]:47357, endpoint=null, alive=false, type=MEMBER] closed. Reason: Wrong bind request from [172.17.0.1]:5701! This node is not requested endpoint: [node2]:5701
I can see an error saying the bind request is coming from 172.17.0.1 to node1, and node1 is not accepting this request.
final Config config = new Config();
config.setGroupConfig(clientConfig().getGroupConfig());
final NetworkConfig networkConfig = new NetworkConfig();
final JoinConfig joinConfig = new JoinConfig();
final TcpIpConfig tcpIpConfig = new TcpIpConfig();
final MulticastConfig multicastConfig = new MulticastConfig();
multicastConfig.setEnabled(false);
final AwsConfig awsConfig = new AwsConfig();
awsConfig.setEnabled(true);
// awsConfig.setSecurityGroupName("xxxx");
awsConfig.setRegion("xxxx");
awsConfig.setIamRole("xxxx");
awsConfig.setTagKey("type");
awsConfig.setTagValue("xxxx");
awsConfig.setConnectionTimeoutSeconds(120);
joinConfig.setAwsConfig(awsConfig);
joinConfig.setMulticastConfig(multicastConfig);
joinConfig.setTcpIpConfig(tcpIpConfig);
networkConfig.setJoin(joinConfig);
final InterfacesConfig interfaceConfig = networkConfig.getInterfaces();
interfaceConfig.setEnabled(true).addInterface("172.29.238.71");
config.setNetworkConfig(networkConfig);
Above is the code used to configure the AwsConfig.
Please help me resolve this issue.
Thanks
You are experiencing an issue (#11795) in the default Hazelcast bind address selection mechanism.
There are several workarounds available:
Workaround 1: System property
You can set the bind address by providing the correct IP address as the hazelcast.local.localAddress system property:
java -Dhazelcast.local.localAddress=[yourCorrectIpGoesHere]
or
System.setProperty("hazelcast.local.localAddress", "[yourCorrectIpGoesHere]")
Read details in System properties chapter of Hazelcast Reference Manual.
Workaround 2: Hazelcast Network configuration
Hazelcast Network configuration allows you to specify which IP addresses can be used to bind the server.
Declarative in hazelcast.xml:
<hazelcast>
  ...
  <network>
    ...
    <interfaces enabled="true">
      <interface>10.3.16.*</interface>
      <interface>10.3.10.4-18</interface>
      <interface>192.168.1.3</interface>
    </interfaces>
  </network>
  ...
</hazelcast>
Programmatic:
Config config = new Config();
NetworkConfig network = config.getNetworkConfig();
InterfacesConfig interfaceConfig = network.getInterfaces();
interfaceConfig.setEnabled(true).addInterface("192.168.1.3");
HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance(config);
Read details in Interfaces section of Hazelcast Reference Manual.
Update:
With the earlier steps you are able to set a proper bind address - the local one returned by ip addr show, for instance. Nevertheless, that could be insufficient if you run Hazelcast in an environment where the local IP and the public IP differ (clouds, Docker).
Next Step: Configure public address
This step is necessary in environments where cluster nodes don't see each other under the reported local address of the other node. You have to set the public address: the one that other nodes are able to reach (optionally with a port specified).
networkConfig.setPublicAddress("172.29.238.71");
// or, if a non-default Hazelcast port is used - e.g. 9991
networkConfig.setPublicAddress("172.29.238.71:9991");

How to get the nova client (v1.1) to use an SSH tunnel when retrieving the server list

The OpenStack nova client is giving me fits. I can't figure out how to get it to use a local SSH tunnel URL I specify instead of the one it retrieves. So:
from novaclient.v1_1 import client as nova_client
from pprint import pprint

self.__nova_client = nova_client.Client(
    'myusername',
    'mypassword',
    'mytenantname',
    'https://localhost:5443/v2.0',
    service_type='compute',
    insecure=True
)
for server in self.__nova_client.servers.list():
    pprint(server)
yields...
requests.exceptions.ConnectionError: HTTPConnectionPool(host='os-compute.vip.mysubdomain.mydomain.com', port=8774): Max retries exceeded with url: /v2/aa0dffecaef543aca072a26fdff5c92b/servers/detail (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
because the "os-compute.vip.mysubdomain.mydomain.com:8774" address is unreachable from where the script is running.
The self.__nova_client = nova_client.Client() bit connects fine because it uses 'https://localhost:5443/v2.0' - the established tunnel I provide. I just need a way to override the "os-compute.vip.mysubdomain.mydomain.com:8774" that it's trying to connect to with the "localhost:8774" tunnel that I set up, but I can't figure out whether/how that's possible.
Any guidance will be greatly appreciated.
Your nova client is pulling the service catalogue from Keystone through the tunnel set up on your localhost. You will need to explicitly override the endpoint specified in the service catalogue.
One way is to specify the endpoint explicitly. While some of the clients allow you to set the endpoint directly on construction, novaclient doesn't; take a look at nova_client.management_url after you've constructed the object and replace it with your localhost address.
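A minimal sketch of that override, reusing the tenant ID from the error above (where exactly management_url lives can vary with the novaclient version, so treat the attribute path as an assumption):

from novaclient.v1_1 import client as nova_client
from pprint import pprint

# Construct the client against the tunnelled Keystone endpoint as before,
# then swap the catalogue-provided compute endpoint for a local tunnel.
# Depending on the novaclient version, the attribute may instead live on
# the nested HTTP client (i.e. nova.client.management_url).
nova = nova_client.Client(
    'myusername',
    'mypassword',
    'mytenantname',
    'https://localhost:5443/v2.0',
    service_type='compute',
    insecure=True
)
nova.management_url = 'http://localhost:8774/v2/aa0dffecaef543aca072a26fdff5c92b'
for server in nova.servers.list():
    pprint(server)

This assumes an ssh -L 8774:os-compute.vip.mysubdomain.mydomain.com:8774 style tunnel is already listening on localhost:8774.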

Version incompatibility issue with Logstash and Elasticsearch?

I'm using Logstash 1.4.1 with Elasticsearch (installed as EC2 cluster) 1.1.1 and Elasticsearch AWS plugin 2.1.1.
To check whether Logstash is properly talking to Elasticsearch, I use -
bin/logstash -e 'input { stdin { } } output { elasticsearch { host => <ES_cluster_IP> } }'
and I get -
log4j, [2014-06-10T18:30:17.622] WARN: org.elasticsearch.discovery: [logstash-ip-xxxxxxxx-20308-2010] waited for 30s and no initial state was set by the discovery
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:744)
But when I use -
bin/logstash -e 'input { stdin { } } output { elasticsearch_http { host => <ES_cluster_IP> } }'
it works fine, with the warning below -
Using milestone 2 output plugin 'elasticsearch_http'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.1/plugin-milestones {:level=>:warn}
I don't understand why I can't use elasticsearch instead of elasticsearch_http even when the versions are compatible.
I'd take care to set the protocol option to one of "http", "transport" and "node". The documentation on this is contradictory - on the one hand it states that it's optional and there is no default, while at the end it says the default differs depending on the runtime:
The ‘node’ protocol will connect to the cluster as a normal
Elasticsearch node (but will not store data). This allows you to use
things like multicast discovery. If you use the node protocol, you
must permit bidirectional communication on the port 9300 (or whichever
port you have configured).
The ‘transport’ protocol will connect to the host you specify and will
not show up as a ‘node’ in the Elasticsearch cluster. This is useful
in situations where you cannot permit connections outbound from the
Elasticsearch cluster to this Logstash server.
The ‘http’ protocol will use the Elasticsearch REST/HTTP interface to
talk to elasticsearch.
All protocols will use bulk requests when talking to Elasticsearch.
The default protocol setting under java/jruby is “node”. The default
protocol on non-java rubies is “http”
The problem here is that the protocol setting has some pretty significant impact on how you connect to Elasticsearch and how it will operate, yet it's not clear what it will do when you don't set protocol. Better to pick one and set it -
http://logstash.net/docs/1.4.1/outputs/elasticsearch#protocol
The Logstash elasticsearch plugin page mentions:
VERSION NOTE: Your Elasticsearch cluster must be running Elasticsearch 1.1.1. If you use
any other version of Elasticsearch, you should set protocol => http in this plugin.
So it is not a version incompatibility.
Elasticsearch uses port 9300 for multicast discovery and for communication with transport/node clients. So it is probably that your Logstash can't talk to your Elasticsearch cluster. Please check your server configuration to see whether the firewall has blocked port 9300.
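A quick way to check that from the Logstash host (assuming telnet is available):

telnet <ES_cluster_IP> 9300

If the connection is refused or times out, it is a network/firewall problem rather than a Logstash one.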
Or set the protocol in the Logstash output configuration:
output {
  elasticsearch { host => localhost protocol => "http" port => "9200" }
  stdout { codec => rubydebug }
}