How to connect hornetq on AWS VPC from another vm on AWS

I have 2 VMs on AWS. On the first VM I have HornetQ and an application that sends messages to it. On the other VM I have an application that consumes from HornetQ.
The consumer fails to pull messages from HornetQ, and I can't understand why. HornetQ is running, and I opened the ports to any IP.
I tried to connect to HornetQ with JConsole (from my local computer) and failed, so I can't see whether HornetQ has any consumers/producers.
I've tried changing the 'bind' configurations to 0.0.0.0, but when I restarted HornetQ they were automatically changed back to the server IP I have in config.properties.
Any suggestions as to why my application fails to connect to HornetQ?
Thanks!

These are the things you need to check for connectivity between VMs in a VPC:
The security group of each instance has both ingress and egress rules, unlike the traditional EC2 security group [now EC2-Classic]. Check the egress from your consumer and the ingress to the server (see the sketch after this list).
If the instances are in different subnets, check the network ACLs as well; the default setting, however, is to allow all traffic.
Check whether iptables or another OS-level firewall is blocking the traffic.
With respect to the failed connection from your local machine to HornetQ: you need to place the instance in a public subnet and configure the instance's security group accordingly; only then is the app/VM accessible from the public internet.
I have assumed that both instances are in the same VPC. The title of the post is slightly misleading, though - if they are in 2 different VPCs altogether, then VPC peering comes into play as well.
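As a concrete way to run those checks, you can dump a security group's rules with the AWS CLI and test the port from the consumer VM. This is only a sketch: the group ID, the server's private IP and the port are placeholders, so substitute the port your HornetQ acceptor actually listens on (commonly 5445).
aws ec2 describe-security-groups --group-ids <consumer-or-server-sg-id> \
    --query 'SecurityGroups[0].{Inbound:IpPermissions,Outbound:IpPermissionsEgress}'
nc -zv <server-private-ip> 5445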

Related

How to debug RDS connectivity

I have an Elastic Beanstalk application running for several years using an RDS database (now attached to the EB itself but set up separately).
This has been running without issues for ages. There's a security group attached to the load balancer, that allows traffic on port 5432 (postgreSQL).
Now I've set up a new environment which is identical, but since I want to use Amazon Linux 2, I cannot clone the existing environment (a cloned environment works as well, BTW). So I've carefully set up everything the same way - I've verified that the SGs are the same, that all environment properties are set, and that the VPC and subnets are identical.
However, as soon as my EB instances try to call RDS, it just gets stuck and times out, producing a HTTP 504 for the calling clients.
I've used the AWS Reachability Analyzer to analyze the path from the EB's EC2 instance to the ENI used by the RDS database instance, and that came out fine - there is reachability, at least VPC and SG-wise. Still, I cannot get the database calls to work.
How would one go about debugging this? What could cause a PostgreSQL connection with valid parameters, from an instance which is confirmed to reach the RDS ENI, to fail for this new instance, while the existing, old EB application still runs fine?
The only differences in configuration are (new vs original):
Node 14 on Amazon Linux 2 vs Node 10 on original Amazon Linux ("v1")
Application load balancer vs classic load balancer
Some Nginx tweaks from the old version removed as they weren't compatible nor applicable
If the path is reachable, what could cause the RDS connectivity to break, when all DB connection params are also verified to be valid?
Edit: What I've now found is that RDS is attached to subnet A, and an EB environment with an instance in subnet A can connect to it, but an instance in subnet B cannot. With old EB environments and classic load balancers a single AZ/subnet could be used, but now at least two must be chosen.
So I suspect my route tables are somehow off. What could cause a host in 10.0.1.x not to reach a host in 10.0.2.x if they're both in the same VPC comprising these two subnets, and Reachability Analyzer thinks there is a reachable path? I cannot find anywhere that these two subnets are isolated.
Check the server connection information:
nslookup myexampledb.xxxx.us-east-1.rds.amazonaws.com
Verify the connectivity:
telnet <RDS endpoint> <port number>
nc -zv <RDS endpoint> <port number>
Note: remember to replace the endpoint/port with the endpoint and port shown in your database settings.
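Since the asker suspects the route tables between subnet A and subnet B, it can also help to dump the routes associated with both subnets. This is a hedged sketch with placeholder subnet IDs (note that a subnet with no explicit association falls back to the VPC's main route table, which this filter will not return):
aws ec2 describe-route-tables \
    --filters "Name=association.subnet-id,Values=<subnet-a-id>,<subnet-b-id>" \
    --query 'RouteTables[].Routes'
Two subnets in the same VPC always share the implicit "local" route, so if the routes look normal, re-check the RDS security group's inbound rule against the new instances' security group.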

Why is communication from GKE to a private ip in GCP not working?

I have what I think is a reasonably straightforward setup in Google Cloud - A GKE cluster, a Cloud SQL instance, and a "Click-To-Deploy" Kafka VM instance.
All of the resources are in the same VPC, with firewall rules to allow all traffic to the internal VPC CIDR blocks.
The pods in the GKE cluster have no problem accessing the Cloud SQL instance via its private IP address. But they can't seem to access the Kafka instance via its private IP address:
# kafkacat -L -b 10.1.100.2
% ERROR: Failed to acquire metadata: Local: Broker transport failure
I've launched another VM manually into the VPC, and it has no problem connecting to the Kafka instance:
# kafkacat -L -b 10.1.100.2
Metadata for all topics (from broker -1: 10.1.100.2:9092/bootstrap):
1 brokers:
broker 0 at ....us-east1-b.c.....internal:9092
1 topics:
topic "notifications" with 1 partitions:
partition 0, leader 0, replicas: 0, isrs: 0
I can't seem to see any real difference in the networking between the containers in GKE and the manually launched VM, especially since both can access the Cloud SQL instance at 10.10.0.3.
Where do I go looking for what's blocking the connection?
I have seen that the error is related to the network.
If you are using GKE on the same VPC network, make sure the Internal Load Balancer is configured properly. Note that this feature is in BETA, which means it is not yet guaranteed to work as expected. Also make sure you are not using any network policy that might block the connection. I found an article on the community site that may help you solve it.
This gave me what I needed: https://serverfault.com/a/924317
The networking rules in GCP still seem wonky to me, coming from a long time working with AWS. I had rules that allowed anything in the VPC CIDR blocks to contact anything else in those same CIDR blocks, but that wasn't enough. Explicitly adding the worker nodes' subnet as the source for a new rule opened it up.
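For reference, a rule of the kind that fixed it can be created like this with gcloud. This is only a sketch: the rule name, network name and source range are placeholders, and you should use your cluster's actual node/pod range (visible in the GKE cluster details):
gcloud compute firewall-rules create allow-gke-to-kafka \
    --network=<your-vpc-network> \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:9092 \
    --source-ranges=<gke-node-or-pod-cidr>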

Connect to Neptune on AWS from local machine

I am trying to connect to a Neptune DB in an AWS instance from my local machine in the office, similar to connecting to RDS from the office. Is it possible to connect to Neptune from a local machine? Is Neptune publicly available? Is there any way a developer can connect to Neptune from the office?
Neptune does not support public endpoints (endpoints that are accessible from outside the VPC). However, there are a few architectural options with which you can access your Neptune instance from outside your VPC. All of them have the same theme: set up a proxy (an EC2 machine, an ALB, something similar, or a combination of these) that resides inside your VPC, and make that proxy accessible from outside your VPC.
It seems like you want to talk to your instance purely for development purposes. The easiest option for that would be to spin up an ALB, and create a target group that points to your instance's IP.
Brief steps (these are intentionally not detailed; please refer to the AWS docs for detailed instructions):
dig +short <your cluster endpoint>
This would give you the current master's IP address.
Create an ALB (See AWS Docs on how to do this).
Make your ALB's target group point to the IP address obtained in step #1 (see the CLI sketch after these steps). By the end of this step, you should have an ALB listening on PORT-A that forwards requests to IP:PORT, where IP is your database IP (from step #1) and PORT is your database port (the default is 8182).
Create a security group that allows inbound traffic from everywhere, i.e. an inbound TCP rule for 0.0.0.0/0 on PORT-A.
Attach the security group to your ALB.
Now, from your developer boxes, you can connect to your ALB endpoint on PORT-A, which internally forwards the request to your Neptune instance.
Do check out the ALB docs for details on how to create it and the concepts around it. If you need me to elaborate on any of the steps, feel free to ask.
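For the target-group part of the steps above, the rough AWS CLI equivalents look like this. It's a sketch under the assumption that 8182 is your database port; the VPC ID, target group ARN and the IP from step #1 are placeholders, and the ALB and listener themselves can be created in the console as per the AWS docs:
aws elbv2 create-target-group --name neptune-tg --protocol HTTPS --port 8182 \
    --vpc-id <your-vpc-id> --target-type ip
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=<ip-from-step-1>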
NOTE: This is not a recommended solution for a production setup. IPs used by Neptune instances are bound to change with failovers and host replacements. Use this solution only for testing purposes. If you want a similar setup for production, feel free to ask a question and we can discuss options.
As already mentioned, you can't access Neptune directly from outside your VPC.
The following link describes another solution, using an SSH tunnel: connecting-to-aws-neptune-from-local-environment.
I find it much easier for testing and development purposes.
You can create the SSH tunnel with PuTTY as well.
Reference: https://github.com/M-Thirumal/aws-cloud-tutorial/blob/main/neptune/connect_from_local.md
Connect to AWS Neptune from the local system
There are many ways to connect to Amazon Neptune from outside of the VPC, such as setting up a load balancer or VPC peering.
Amazon Neptune DB clusters can only be created in an Amazon Virtual Private Cloud (VPC). One way to connect to Amazon Neptune from outside of the VPC is to set up an Amazon EC2 instance as a proxy server within the same VPC. With this approach, you will also want to set up an SSH tunnel to securely forward traffic to the VPC.
Part 1: Set up an EC2 proxy server.
Launch an Amazon EC2 instance located in the same region as your Neptune cluster. In terms of configuration, Ubuntu can be used. Since this is a proxy server, you can choose the lowest resource settings.
Make sure the EC2 instance is in the same VPC as your Neptune cluster. To find the VPC for your Neptune cluster, check the console under Neptune > Subnet groups. The instance's security group needs to be able to send and receive on port 22 for SSH and port 8182 for Neptune. See below for an example security group setup.
Lastly, make sure you save the key-pair file (.pem) and note the directory for use in the next step.
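An example of the security-group setup referred to above, expressed as AWS CLI calls. It's only a sketch: the group ID, your office/home IP and the VPC CIDR are placeholders, and 8182 assumes the default Neptune port:
aws ec2 authorize-security-group-ingress --group-id <proxy-sg-id> \
    --protocol tcp --port 22 --cidr <your-public-ip>/32
aws ec2 authorize-security-group-ingress --group-id <proxy-sg-id> \
    --protocol tcp --port 8182 --cidr <your-vpc-cidr>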
Part 2: Set up an SSH tunnel.
This step can vary depending on whether you are running Windows or MacOS.
Modify your hosts file to map localhost to your Neptune endpoint.
Windows: Open the hosts file as an Administrator (C:\Windows\System32\drivers\etc\hosts)
MacOS: Open Terminal and type in the command: sudo nano /etc/hosts
Add the following line to the hosts file, replacing the text with your Neptune endpoint address.
127.0.0.1 localhost YourNeptuneEndpoint
Open Command Prompt as an Administrator (Windows) or Terminal (MacOS) and run the following command. On Windows, you may need to run SSH from C:\Users\YourUsername\
ssh -i path/to/keypairfilename.pem ec2-user@yourec2instanceendpoint -N -L 8182:YourNeptuneEndpoint:8182
The -N flag prevents an interactive bash session with EC2 and forwards ports only. On the first successful connection you will be asked whether you want to continue connecting; type yes and press Enter.
To test the success of your local graph-notebook connection to Amazon Neptune, open a browser and navigate to:
https://YourNeptuneEndpoint:8182/status
You should see a report, similar to the one below, indicating the status and details of your specific cluster:
{
  "status": "healthy",
  "startTime": "Wed Nov 04 23:24:44 UTC 2020",
  "dbEngineVersion": "1.0.3.0.R1",
  "role": "writer",
  "gremlin": {
    "version": "tinkerpop-3.4.3"
  },
  "sparql": {
    "version": "sparql-1.1"
  },
  "labMode": {
    "ObjectIndex": "disabled",
    "DFEQueryEngine": "disabled",
    "ReadWriteConflictDetection": "enabled"
  }
}
Close Connection
When you're ready to close the connection, use Ctrl+D to exit.
You can connect to Neptune by using the Gremlin Console on your local machine.
USE THIS LINK to set up your local Gremlin server; it worked for me with Gremlin version 3.3.2.
You only have to update remote.yaml with your URL and port.
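For reference, the stock remote.yaml shipped with the 3.3.x Gremlin Console usually only needs the host (and possibly the port) changed; a minimal sketch, with the endpoint as a placeholder:
hosts: [<your-neptune-endpoint>]
port: 8182
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
You can then connect from the console with :remote connect tinkerpop.server conf/remote.yaml followed by :remote console.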

AWS Neptune Host did not respond in a timely fashion - check the server status and submit again

I've gone through the whole start-up tutorial, connected to the TinkerPop 3 server remotely from an EC2 instance that is in the same VPC, and I get the error:
gremlin> g.addV('person').property(id, '1').property('name', 'marko')
Host did not respond in a timely fashion - check the server status and submit again.
Type ':help' or ':h' for help.
Display stack trace? [yN]
any reason this might be happening?
Let's try a couple of things to get you started with debugging the issue here:
Have you tried hitting the /status endpoint (a curl sketch follows this list)? If this endpoint is working, then there is a problem with the console configuration. If it isn't, then there is an issue with the connectivity from the EC2 instance to the DB.
Can you ensure that the EC2 instance was launched with the same security group to which you gave inbound access on port 8182 on the DB (during step #8 in the setup instructions)?
Please ensure that your cluster and instance status is "available" as observed from the Neptune console.
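For the first check, a quick way to hit the status endpoint from the EC2 instance is curl; the endpoint is a placeholder and 8182 assumes the default port (use http:// instead if your cluster does not have TLS enabled):
curl https://<your-cluster-endpoint>:8182/status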
The recommended way to manage such connections is to have 2 security groups:
client - a security group that you attach to all clients, such as Lambdas, EC2 instances, etc. The default outbound rule gives you outbound access to every resource in the VPC. You can tighten that if you'd like.
db - a security group that you attach to your Neptune cluster. In this security group, edit the inbound rules and explicitly add a TCP rule that allows inbound connections on your database port (8182 is the default port); a CLI sketch follows below.
You can attach the db security group to your cluster either during creation or by modifying existing clusters.
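The inbound rule on the db security group can reference the client security group directly instead of a CIDR. A sketch with placeholder group IDs, assuming the default port 8182:
aws ec2 authorize-security-group-ingress --group-id <db-sg-id> \
    --protocol tcp --port 8182 --source-group <client-sg-id>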

zookeeper installation on multiple AWS EC2instances

I am new to ZooKeeper and AWS EC2. I am trying to install ZooKeeper on 3 EC2 instances.
As per the ZooKeeper documentation, I have installed ZooKeeper on all 3 instances, created zoo.conf, and added the configuration below:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=localhost:2888:3888
server.2=<public ip of ec2 instance 2>:2889:3889
server.3=<public ip of ec2 instance 3>:2890:3890
I have also created a myid file on all 3 instances at /opt/zookeeper/data/myid, as per the guideline.
I have a couple of queries:
Whenever I start the ZooKeeper server on an instance, it starts in standalone mode (as per the logs).
Will the above configuration really make the nodes connect to each other? What are the ports 2889:3889 & 2890:3890 about? Do I need to configure them on the EC2 machines, or should I use some other ports?
Do I need to create a security group to open these connections? I am not sure how to do that for an EC2 instance.
How can I confirm that all 3 ZooKeeper servers have started and can communicate with each other?
The ZooKeeper configuration is designed such that you can install the exact same configuration file on all servers in the cluster without modification. This makes ops a bit simpler. The component that specifies the configuration for the local node is the myid file.
The configuration you've defined is not one that can be shared across all servers. All of the servers in your server list should be binding to a private IP address that is accessible to the other nodes in the network. You're seeing your server start in standalone mode because you're binding to localhost, and the problem is that the other servers in the cluster can't see localhost.
Your configuration should look more like:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=<private ip of ec2 instance 1>:2888:3888
server.2=<private ip of ec2 instance 2>:2888:3888
server.3=<private ip of ec2 instance 3>:2888:3888
The two ports listed in each server definition are respectively the quorum and election ports used by ZooKeeper nodes to communicate with one another internally. There's usually no need to modify these ports, and you should try to keep them the same across servers for consistency.
Additionally, as I said you should be able to share that exact same configuration file across all instances. The only thing that should have to change is the myid file.
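For example, with the configuration above, the myid files would simply contain 1, 2 and 3 respectively (run the matching command on each instance):
echo 1 > /opt/zookeeper/data/myid
echo 2 > /opt/zookeeper/data/myid
echo 3 > /opt/zookeeper/data/myid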
You will probably need to create a security group and open up the client port so it is available to clients, and the quorum/election ports so they are accessible to the other ZooKeeper servers.
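A sketch of what those rules could look like with the AWS CLI; the group ID and client CIDR are placeholders, and the ports match the configuration above (2181 for clients, 2888/3888 for quorum/election traffic between the servers themselves):
aws ec2 authorize-security-group-ingress --group-id <zk-sg-id> --protocol tcp --port 2181 --cidr <client-cidr>
aws ec2 authorize-security-group-ingress --group-id <zk-sg-id> --protocol tcp --port 2888 --source-group <zk-sg-id>
aws ec2 authorize-security-group-ingress --group-id <zk-sg-id> --protocol tcp --port 3888 --source-group <zk-sg-id>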
Finally, you might want to look into a UI to help manage the cluster. Netflix makes a decent UI that gives you a view of your cluster and also helps with cleaning up old logs and storing snapshots to S3 (ZooKeeper takes snapshots but does not delete old transaction logs, so your disk will eventually fill up if they're not properly removed). Once it's configured correctly, you should be able to see the ZooKeeper servers connecting to each other in the logs as well.
EDIT
@czerasz notes that starting from version 3.4.0 you can use the autopurge.snapRetainCount and autopurge.purgeInterval directives to keep your snapshots clean.
@chomp notes that some users have had to use 0.0.0.0 for the local server IP to get the ZooKeeper configuration to work on EC2. In other words, replace <private ip of ec2 instance 1> with 0.0.0.0 in the configuration file on instance 1. This is counter to the way ZooKeeper configuration files are designed, but may be necessary on EC2.
Adding some additional information regarding ZooKeeper clustering inside Amazon's VPC.
The solution using the VPC IP addresses should be the preferred one; using '0.0.0.0' should be your last option.
In case you are running Docker on your EC2 instance, '0.0.0.0' will not work properly with ZooKeeper 3.5.x after a node restart.
The issue lies in the resolution of '0.0.0.0' and in how the ensemble shares node addresses and SID order (if you start your nodes in descending order, the issue may not occur).
So far the only working solution is to upgrade to version 3.6.2 or later.