I want to launch a cluster with one machine inside my home network and the other having a public IP.
I configured the file /etc/default/influxdb by adding the following line:
INFLUXD_OPTS="-join <Public-IP>:8091"
I followed the official documentation for InfluxDB cluster settings.
I added rules for ports 8086 and 8091 to the security groups. I am able to telnet to those ports.
show servers
name: data_nodes
----------------
id http_addr tcp_addr
1 localhost:8086 localhost:8088
name: meta_nodes
----------------
id http_addr tcp_addr
1 localhost:8091 localhost:8088
How do I launch a cluster with one machine in my home network and another machine in the AWS cloud that has a public IP?
The AWS machines cannot reach your localhost. You must use domain names that are fully resolvable by every member of the cluster. Even once that's working, a cluster connected over the public internet is likely to fail due to latency issues.
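For example, a minimal sketch (the hostnames below are hypothetical, they must resolve to the right addresses from both machines, and the exact option names can vary between InfluxDB versions):
# /etc/default/influxdb on the home machine: join using a name (or address) that this machine can reach
INFLUXD_OPTS="-join influx-aws.example.com:8091"
# influxdb.conf on each node: advertise a hostname the *other* node can resolve, instead of localhost
hostname = "influx-home.example.com"
The AWS node also has to be able to reach the home machine on ports 8088 and 8091, which usually means port forwarding on the home router - another reason this topology tends to be fragile.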
Related
I have an Elastic Beanstalk application that has been running for several years using an RDS database (now attached to the EB itself, but set up separately).
This has been running without issues for ages. There's a security group attached to the load balancer that allows traffic on port 5432 (PostgreSQL).
Now I've set up a new environment which is identical, but since I want to use Amazon Linux 2, I cannot clone the existing environment (a cloned environment works as well, BTW). So I've carefully set up everything the same way - I've verified that the SGs are the same, that all environment properties are set, and that the VPC and subnets are identical.
However, as soon as my EB instances try to call RDS, the call just gets stuck and times out, producing an HTTP 504 for the calling clients.
I've used the AWS Reachability Analyzer to analyze the path from the EB's EC2 instance to the ENI used by the RDS database instance, and that came out fine - there is reachability, at least VPC and SG-wise. Still, I cannot get the database calls to work.
How would one go about debugging this? What could cause a PostgreSQL connection with valid parameters, from an instance that is confirmed to reach the RDS ENI, to fail for this new instance, while the existing, old EB application still runs fine?
The only differences in configuration are (new vs original):
Node 14 on Amazon Linux 2 vs Node 10 on original Amazon Linux ("v1")
Application load balancer vs classic load balancer
Some Nginx tweaks from the old version removed, as they weren't compatible or applicable
If the path is reachable, what could cause the RDS connectivity to break, when all DB connection params are also verified to be valid?
Edit: What I've now found is that RDS is attached to subnet A, and an EB environment with an instance in subnet A can connect to it, but an instance in subnet B cannot. With old EB environments and classic load balancers, a single AZ/subnet could be used, but now at least two must be chosen.
So I suspect my route tables are somehow off. What could cause a host in 10.0.1.x not to reach a host in 10.0.2.x if they're both in the same VPC comprising these two subnets, and Reachability Analyzer thinks there is a reachable path? I cannot find anywhere that these two subnets are isolated.
Check the server connection information:
nslookup myexampledb.xxxx.us-east-1.rds.amazonaws.com
Verify the connection:
telnet <RDS endpoint> <port number>
nc -zv <RDS endpoint> <port number>
Note: remember to replace the endpoint and port with the values shown in your database settings.
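For example, running the same checks from an EB instance in each subnet makes the comparison concrete (the endpoint is the hypothetical one from above; 5432 is the PostgreSQL port mentioned in the question):
nslookup myexampledb.xxxx.us-east-1.rds.amazonaws.com
nc -zv myexampledb.xxxx.us-east-1.rds.amazonaws.com 5432
If the instance in subnet A connects while the instance in subnet B hangs and times out, the problem is in the network path (route tables, NACLs, or the source rules on the RDS security group) rather than in the application or its connection parameters.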
I am trying to connect to a Neptune DB in an AWS instance from my local machine at the office, similar to connecting to RDS from the office. Is it possible to connect to Neptune DB from a local machine? Is Neptune DB publicly accessible? Is there any way a developer can connect to Neptune DB from the office?
Neptune does not support public endpoints (endpoints that are accessible from outside the VPC). However, there are a few architectural options with which you can access your Neptune instance from outside your VPC. All of them have the same theme: set up a proxy (an EC2 machine, an ALB, something similar, or a combination of these) that resides inside your VPC, and make that proxy accessible from outside your VPC.
It seems like you want to talk to your instance purely for development purposes. The easiest option for that would be to spin up an ALB, and create a target group that points to your instance's IP.
Brief Steps (These are intentionally not in detail, please refer to AWS Docs for detailed instructions):
dig +short <your cluster endpoint>
This would give you the current master's IP address.
Create an ALB (See AWS Docs on how to do this).
Make your ALB's target group point to the IP address obtained in step #1. By the end of this step, you should have an ALB listening on PORT-A that forwards requests to IP:PORT, where IP is your database IP (from step 1) and PORT is your database port (the default is 8182).
Create a security group that allows inbound traffic from everywhere, i.e. an inbound TCP rule for 0.0.0.0/0 on PORT-A.
Attach the security group to your ALB.
Now from your developer boxes, you can connect to your ALB endpoint at PORT-A, which would internally forward the request to your Neptune instance.
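If you prefer the CLI over the console, steps 2-4 could look roughly like the sketch below (every name, ID, and IP is a hypothetical placeholder, and PORT-A is 80 here):
# 2. Create an ALB target group of type "ip" pointing at the Neptune port
aws elbv2 create-target-group \
    --name neptune-dev-tg --protocol HTTP --port 8182 \
    --vpc-id vpc-0123456789abcdef0 --target-type ip
# 3. Register the IP obtained from the dig command in step 1
aws elbv2 register-targets \
    --target-group-arn <target-group-arn-from-step-2> \
    --targets Id=10.0.1.23
# 4. Create the ALB itself and a listener on PORT-A forwarding to the target group
aws elbv2 create-load-balancer \
    --name neptune-dev-alb --type application --scheme internet-facing \
    --subnets subnet-aaaa1111 subnet-bbbb2222 \
    --security-groups sg-0123456789abcdef0
aws elbv2 create-listener \
    --load-balancer-arn <alb-arn-from-previous-command> \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn-from-step-2>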
Do check out the ALB docs for details on how to create it and the concepts around it. If you need me to elaborate on any of the steps, feel free to ask.
NOTE: This is not a recommended solution for a production setup. IPs used by Neptune instances are bound to change with failovers and host replacements. Use this solution only for testing purposes. If you want a similar setup for production, feel free to ask a question and we can discuss options.
As already mentioned, you can't access Neptune directly from outside your VPC.
The following link describes another solution using a SSH tunnel: connecting-to-aws-neptune-from-local-environment.
I find it much easier for testing and development purposes.
You can create the SSH tunnel with PuTTY as well.
Reference: https://github.com/M-Thirumal/aws-cloud-tutorial/blob/main/neptune/connect_from_local.md
Connect to AWS Neptune from the local system
There are many ways to connect to Amazon Neptune from outside of the VPC, such as setting up a load balancer or VPC peering.
Amazon Neptune DB clusters can only be created in an Amazon Virtual Private Cloud (VPC). One way to connect to Amazon Neptune from outside of the VPC is to set up an Amazon EC2 instance as a proxy server within the same VPC. With this approach, you will also want to set up an SSH tunnel to securely forward traffic to the VPC.
Part 1: Set up an EC2 proxy server.
Launch an Amazon EC2 instance located in the same region as your Neptune cluster. In terms of configuration, Ubuntu can be used. Since this is a proxy server, you can choose the lowest resource settings.
Make sure the EC2 instance is in the same VPC as your Neptune cluster. To find the VPC for your Neptune cluster, check the console under Neptune > Subnet groups. The instance's security group needs to be able to send and receive on port 22 for SSH and port 8182 for Neptune. See below for an example security group setup.
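A minimal example setup could look like this (assuming you connect from a single office/home IP and Neptune is on its default port 8182):
EC2 proxy SG, inbound: TCP 22 from <your local machine's public IP>/32
EC2 proxy SG, outbound: TCP 8182 to the Neptune cluster's security group (or the VPC CIDR)
Neptune cluster SG, inbound: TCP 8182 from the EC2 proxy's security group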
Lastly, make sure you save the key-pair file (.pem) and note the directory for use in the next step.
Part 2: Set up an SSH tunnel.
This step can vary depending on whether you are running Windows or macOS.
Modify your hosts file to map localhost to your Neptune endpoint.
Windows: Open the hosts file as an Administrator (C:\Windows\System32\drivers\etc\hosts)
MacOS: Open Terminal and type in the command: sudo nano /etc/hosts
Add the following line to the hosts file, replacing the text with your Neptune endpoint address.
127.0.0.1 localhost YourNeptuneEndpoint
Open Command Prompt as an Administrator for Windows or Terminal for MacOS and run the following command. For Windows, you may need to run SSH from C:\Users\YourUsername\
ssh -i path/to/keypairfilename.pem ec2-user@yourec2instanceendpoint -N -L 8182:YourNeptuneEndpoint:8182
The -N flag prevents an interactive bash session with EC2 and forwards ports only. On the first successful connection you will be asked whether you want to continue connecting; type yes and press Enter.
To test the success of your local graph-notebook connection to Amazon Neptune, open a browser and navigate to:
https://YourNeptuneEndpoint:8182/status
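The same check can also be done from a terminal, using the same placeholder endpoint as above:
curl https://YourNeptuneEndpoint:8182/status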
You should see a report, similar to the one below, indicating the status and details of your specific cluster:
{
  "status": "healthy",
  "startTime": "Wed Nov 04 23:24:44 UTC 2020",
  "dbEngineVersion": "1.0.3.0.R1",
  "role": "writer",
  "gremlin": {
    "version": "tinkerpop-3.4.3"
  },
  "sparql": {
    "version": "sparql-1.1"
  },
  "labMode": {
    "ObjectIndex": "disabled",
    "DFEQueryEngine": "disabled",
    "ReadWriteConflictDetection": "enabled"
  }
}
Close Connection
When you're ready to close the connection, use Ctrl+D to exit.
You can connect to Neptune DB by using the Gremlin console on your local machine.
USE THIS LINK to set up your local Gremlin server; it works for me with Gremlin version 3.3.2.
You only have to update remote.yaml with your URL and port, roughly as sketched below.
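A rough sketch of what that remote.yaml could look like (YourNeptuneEndpoint is a placeholder, and the serializer class shown is the GraphSON one bundled with recent Gremlin consoles; adjust it to match your console version):
hosts: [YourNeptuneEndpoint]
port: 8182
connectionPool: { enableSsl: true }
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, config: { serializeResultToString: true }}
Then, inside the Gremlin console:
:remote connect tinkerpop.server conf/remote.yaml
:remote console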
I have 3 Consul servers that I created within AWS using Terraform; they are joined as part of a cluster.
A security group created as part of that Terraform allows inbound TCP/UDP on ports 8300, 8301, 8302, 8400, and 8500.
I have installed the consul agent on a new Ubuntu 16.04 instance.
I collect the private IP of one of the Consul servers and try to join it from the client:
consul agent -join 172.1.1.1:8301 -data-dir /tmp/consul
Result:
==> Starting Consul agent...
==> Joining cluster...
==> 1 error(s) occurred:
* Failed to join 172.1.1.1: dial tcp 172.1.1.1:8301: i/o timeout
I can't see what is missing here that is stopping the client from joining.
There is not enough data in the question. What do you mean by "collected the private IP": was it the server's private IP assigned by the subnet, or is the IP you listed actually a "TaggedAddresses" entry from Consul itself, which is created if you are not running Consul on the host network? So clearly, you need to share some of your Consul server configuration too.
Secondly, if it is the server's private IP only, please make sure that there is no issue with the NACL or ephemeral ports. You will find more information at the following link from Amazon's official documentation:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html#VPC_ACLs_Ephemeral_Ports
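If it really is the instance's private IP, one quick sanity check is that Serf LAN (port 8301) must be reachable over both TCP and UDP, and that the join actually registered (a sketch, using the placeholder IP from the question):
# from the new Ubuntu client
nc -zv  172.1.1.1 8301     # TCP
nc -zvu 172.1.1.1 8301     # UDP (best-effort check; UDP results from nc are not always reliable)
# after a successful join, all nodes should show up here
consul members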
I am new to ZooKeeper and AWS EC2. I am trying to install ZooKeeper on 3 EC2 instances.
As per the ZooKeeper documentation, I have installed ZooKeeper on all 3 instances, created zoo.cfg, and added the configuration below:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=localhost:2888:3888
server.2=<public ip of ec2 instance 2>:2889:3889
server.3=<public ip of ec2 instance 3>:2890:3890
I have also created the myid file on all 3 instances as /opt/zookeeper/data/myid, as per the guidelines.
I have a couple of queries, as below:
Whenever I start the ZooKeeper server on each instance, it starts in standalone mode (as per the logs).
Is the above configuration really going to let the nodes connect to each other? What are ports 2889:3889 and 2890:3890 all about? Do I need to configure them on the EC2 machines, or should I specify some other ports?
Do I need to create a security group to open these connections? I am not sure how to do that for an EC2 instance.
How do I confirm that all 3 ZooKeeper nodes have started and can communicate with each other?
The ZooKeeper configuration is designed such that you can install the exact same configuration file on all servers in the cluster without modification. This makes ops a bit simpler. The component that specifies the configuration for the local node is the myid file.
The configuration you've defined is not one that can be shared across all servers. All of the servers in your server list should be binding to a private IP address that is accessible to other nodes in the network. You're seeing your server start in standalone mode because you're binding to localhost. So, the problem is the other servers in the cluster can't see localhost.
Your configuration should look more like:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=<private ip of ec2 instance 1>:2888:3888
server.2=<private ip of ec2 instance 2>:2888:3888
server.3=<private ip of ec2 instance 3>:2888:3888
The two ports listed in each server definition are respectively the quorum and election ports used by ZooKeeper nodes to communicate with one another internally. There's usually no need to modify these ports, and you should try to keep them the same across servers for consistency.
Additionally, as I said you should be able to share that exact same configuration file across all instances. The only thing that should have to change is the myid file.
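For example (matching the dataDir above), a small sketch of the only per-host difference:
# on instance 1
echo "1" > /opt/zookeeper/data/myid
# on instance 2
echo "2" > /opt/zookeeper/data/myid
# on instance 3
echo "3" > /opt/zookeeper/data/myid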
You probably will need to create a security group and open up the client port to be available for clients and the quorum/election ports to be accessible by other ZooKeeper servers.
Finally, you might want to look in to a UI to help manage the cluster. Netflix makes a decent UI that will give you a view of your cluster and also help with cleaning up old logs and storing snapshots to S3 (ZooKeeper takes snapshots but does not delete old transaction logs, so your disk will eventually fill up if they're not properly removed). But once it's configured correctly, you should be able to see the ZooKeeper servers connecting to each other in the logs as well.
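As for confirming that all 3 nodes have started and can talk to each other, a simple check is ZooKeeper's built-in four-letter commands (a sketch; on newer versions these may need to be whitelisted via 4lw.commands.whitelist):
echo srvr | nc localhost 2181
# one node should report "Mode: leader" and the other two "Mode: follower";
# "Mode: standalone" means the node did not join the ensemble
Running bin/zkServer.sh status on each instance gives the same information.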
EDIT
@czerasz notes that starting from version 3.4.0 you can use the autopurge.snapRetainCount and autopurge.purgeInterval directives to keep your snapshots clean.
@chomp notes that some users have had to use 0.0.0.0 for the local server IP to get the ZooKeeper configuration to work on EC2. In other words, replace <private ip of ec2 instance 1> with 0.0.0.0 in the configuration file on instance 1. This is counter to the way ZooKeeper configuration files are designed but may be necessary on EC2.
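For example, the server list in the configuration file on instance 1 would then contain (the other two instances keep the private IPs):
server.1=0.0.0.0:2888:3888
server.2=<private ip of ec2 instance 2>:2888:3888
server.3=<private ip of ec2 instance 3>:2888:3888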
Adding some additional info regarding ZooKeeper clustering inside Amazon's VPC.
The solution using the VPC's public IP address should be the preferred one, since with ZooKeeper, using '0.0.0.0' should be your last option.
If you are using Docker on your EC2 instance, '0.0.0.0' will not work properly with ZooKeeper 3.5.x after a node restart.
The issue lies in how '0.0.0.0' is resolved and in the ensemble's sharing of node addresses and SID order (if you start your nodes in descending order, this issue may not occur).
So far the only working solution is to upgrade to version 3.6.2 or later.
I have 2 VMs on AWS. On the first VM I have HornetQ and an application that sends messages to HornetQ. On the other VM I have an application that is a consumer of HornetQ.
The consumer fails to pull messages from HornetQ, and I can't understand why. HornetQ is running, and I opened the ports to any IP.
I tried to connect to HornetQ with JConsole (from my local computer) and failed, so I can't see whether HornetQ has any consumers/suppliers.
I've tried to change the 'bind' configuration to 0.0.0.0, but when I restarted HornetQ it was automatically changed back to the server IP I have in config.properties.
Any suggestions as to what might be the problem that prevents my application from connecting to HornetQ?
Thanks!
These are the things you need to check for connectivity between VMs in a VPC:
The security group of the instance has both ingress and egress configuration settings, unlike the traditional EC2 security group [now EC2-Classic]. Check the egress from your consumer and the ingress to the server.
If the instances are in different subnets, you need to check the network ACLs as well; however, the default setting is to allow.
Check whether iptables or an OS-level firewall is blocking traffic (see the quick test sketched at the end of this answer).
With respect to the failed connectivity from your local machine to HornetQ: you need to place the instance in a public subnet and configure the instance's SG accordingly, so that only the app/VM would be accessible to the public internet.
I have assumed that both instances are in the same VPC. However, the title of the post sounds slightly misleading: if they are in 2 different VPCs altogether, then the concept of VPC Peering also comes into play.
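As a quick network-level test for the first three points, something like the following could be run (a sketch; 5445 is HornetQ's default Netty acceptor port, so substitute whatever port your acceptor is actually configured with):
# from the consumer VM, test TCP reachability to the HornetQ VM's private IP
nc -zv <hornetq-vm-private-ip> 5445
# on the HornetQ VM, confirm which address and port the broker is actually bound to
sudo netstat -tlnp | grep 5445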