Traffic control Filter is not working properly with iperf - scheduling

I am trying to configure the PRIO scheduler using Traffic Control (tc) in Mininet, and I want to set three different priorities.
After creating the network in Mininet, I delete the default scheduler and add my own using the shell command:
tc qdisc add dev "ethNAME" root handle 1: prio
Afterwards I add three different filters:
tc filter add dev "ethNAME" protocol ip parent 1: prio 1 u32 match ip src "host1 IP Address" flowid 1:1
tc filter add dev "ethNAME" protocol ip parent 2: prio 2 u32 match ip src "host2 IP Address" flowid 1:2
tc filter add dev "ethNAME" protocol ip parent 1: prio 3 u32 match ip src "host3 IP Address" flowid 1:3
As seen in the picture, I added the scheduler and the three filters for prioritizing the packets.
When I test by sending UDP packets from all three hosts to one server using iperf, the results show no prioritizing at all.
What am I doing wrong?
Thank you
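One thing that stands out in the commands above (an observation, not a confirmed fix): the second filter is attached to parent 2:, but the PRIO qdisc was created with handle 1:, so that filter is never consulted. A corrected sketch, keeping "ethNAME" and the host addresses as placeholders from the question:

```
# 3-band PRIO qdisc at the root
tc qdisc add dev "ethNAME" root handle 1: prio

# all three filters must attach to parent 1: (the PRIO qdisc)
tc filter add dev "ethNAME" protocol ip parent 1: prio 1 u32 match ip src "host1 IP Address" flowid 1:1
tc filter add dev "ethNAME" protocol ip parent 1: prio 2 u32 match ip src "host2 IP Address" flowid 1:2
tc filter add dev "ethNAME" protocol ip parent 1: prio 3 u32 match ip src "host3 IP Address" flowid 1:3
```

Also note that PRIO only reorders packets when the egress link is actually congested: if the combined iperf UDP streams stay below the link capacity, every band drains immediately and the results will show no prioritization.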

Kafka Multi broker setup with ec2 machine: Timed out waiting for a node assignment. Call: createTopics

I am trying to setup kafka with 3 broker nodes and 1 zookeeper node in AWS EC2 instances. I have following server.properties for every broker:
kafka-1:
broker.id=0
listeners=PLAINTEXT_1://ec2-**-***-**-17.eu-central-1.compute.amazonaws.com:9092
advertised.listeners=PLAINTEXT_1://ec2-**-***-**-17.eu-central-1.compute.amazonaws.com:9092
listener.security.protocol.map=,PLAINTEXT_1:PLAINTEXT
inter.broker.listener.name=PLAINTEXT_1
zookeeper.connect=ec2-**-***-**-105.eu-central-1.compute.amazonaws.com:2181
kafka-2:
broker.id=1
listeners=PLAINTEXT_2://ec2-**-***-**-43.eu-central-1.compute.amazonaws.com:9093
advertised.listeners=PLAINTEXT_2://ec2-**-***-**-43.eu-central-1.compute.amazonaws.com:9093
listener.security.protocol.map=,PLAINTEXT_2:PLAINTEXT
inter.broker.listener.name=PLAINTEXT_2
zookeeper.connect=ec2-**-***-**-105.eu-central-1.compute.amazonaws.com:2181
kafka-3:
broker.id=2
listeners=PLAINTEXT_3://ec2-**-***-**-27.eu-central-1.compute.amazonaws.com:9094
advertised.listeners=PLAINTEXT_3://ec2-**-***-**-27.eu-central-1.compute.amazonaws.com:9094
listener.security.protocol.map=,PLAINTEXT_3:PLAINTEXT
inter.broker.listener.name=PLAINTEXT_3
zookeeper.connect=ec2-**-***-**-105.eu-central-1.compute.amazonaws.com:2181
zookeeper:
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
When I run the following command in ZooKeeper, I see that they are connected.
I have also telnetted from each broker to the others on their broker ports; they all connect.
However, when I try to create a topic with replication factor 2, I get "Timed out waiting for a node assignment".
I cannot understand what is incorrect with my setup. I see 3 nodes running in ZooKeeper, but I have problems when creating a topic. By the way, when I set the replication factor to 1 I get the same error. How can I make sure that everything is alright with my cluster?
It's good that telnet checks that the port is open, but it doesn't verify that the Kafka protocol works; you could use the kcat utility for that. The fix includes:
- setting listeners to either PLAINTEXT://:9092 or PLAINTEXT://0.0.0.0:9092 for every broker, which means using the same port on each
- removing the number from the listener mapping and the advertised listeners property, so that each broker's configuration is the same
I'd also recommend looking at Ansible/Terraform/CloudFormation to ensure you modify the cluster consistently rather than editing individual settings manually.
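Applied to the first broker, those suggestions would give a server.properties along these lines (a sketch using the masked hostname from the question; only broker.id and advertised.listeners differ per broker):

```
broker.id=0
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://ec2-**-***-**-17.eu-central-1.compute.amazonaws.com:9092
inter.broker.listener.name=PLAINTEXT
zookeeper.connect=ec2-**-***-**-105.eu-central-1.compute.amazonaws.com:2181
```

With the default PLAINTEXT listener name there is no need for a listener.security.protocol.map entry at all.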

wso2am-analytics 2.2.0 spark on offset 0

I am installing wso2am-analytics-2.2.0 with port offset 0, and I get error messages such as:
WARN {org.apache.spark.scheduler.TaskSetManager} - Lost task 0.0 in stage 2990.0 (TID 147439, 10.0.11.26): FetchFailed(BlockManagerId(0, someserver.compute.internal, 12001), shuffleId=745, mapId=0, reduceId=0, message=
org.apache.spark.shuffle.FetchFailedException: Failed to connect to ip-10-0-17-131.eu-central-1.compute.internal:12001
Apparently something is configured to connect to port 12001 (while the server seems to listen on 12000).
Where can I configure the port 12000?
Thanks
This port is defined in <Product_Home>/repository/conf/analytics/spark/spark-defaults.conf. The property name is spark.blockManager.port. However, you shouldn't configure it manually.
To my knowledge, this particular issue is a connectivity problem. DAS uses ports in the 1200x range for Spark executor communication, so with multiple executors, or when a new executor is spawned after one is killed, the next port in the range is opened. The network must therefore allow traffic through that whole port range. Opening that port range on the network interface of ip-10-0-17-131.eu-central-1.compute.internal should solve your issue.
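On a RHEL-style host that uses firewalld, opening such a range could be sketched like this (the 12000-12010 range is an assumption; widen it to however many executors you expect):

```
# open the Spark block-manager port range and make it persistent
sudo firewall-cmd --add-port=12000-12010/tcp --permanent
sudo firewall-cmd --reload
```

On EC2, the equivalent change would go in the instance's security group instead.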

EC2 Security Group not connecting to my IP

Seems like a basic job, but for some reason it is not working for me. I wish to access my EC2 instances from my office IP only.
I went into my security group and added an SSH rule with the source restricted to my IP, like this -
But this does not seem to be working for me at all. I get "connection denied" when I try to connect via WinSCP or from a terminal.
Everything works if I change the source to Anywhere (0.0.0.0/0).
Does anyone have any pointers for me?
Log in to the EC2 instance using the method that works and issue the command
who am i
It will say something like
ec2-user pts/0 2016-02-29 15:06 (104.66.242.192)
Use the IP address shown for you (not the one above) in the security group rule.
Although "who am i" works fine, I'd like to add two more solutions; both are very easy.
Solution 1:
Step 1: Open the security group to all IPs (0.0.0.0/0) for a while.
Step 2: Make an SSH connection to your server.
Step 3: Run the "w" command and check the output in the FROM column.
ubuntu@ip-172-31-39-228:~$ w
23:20:09 up 5 min, 1 user, load average: 0.08, 0.08, 0.04
USER TTY FROM LOGIN# IDLE JCPU PCPU WHAT
ubuntu pts/0 52.95.75.17 23:20 0.00s 0.01s 0.00s w
Step 4: Replace the 0.0.0.0/0 in the security group with this IP (e.g. 52.95.75.17/32).
Solution 2:
Step 1: Open the security group to all IPs (0.0.0.0/0) for a while.
Step 2: Make an SSH connection to your server.
Step 3: Check the last-login info in the welcome message,
like:
Learn more about enabling ESM Apps service at https://ubuntu.com/esm
Last login: Thu Feb 9 23:21:42 2023 from 52.95.75.17
ubuntu@ip-172-31-39-228:~$
Step 4 (optional): If the IP address is not shown in the welcome message, run the "last" command.
ubuntu@ip-172-31-39-228:~$ last
ubuntu pts/2 52.95.75.17 Thu Feb 9 23:33 still logged in
ubuntu pts/1 52.95.75.17 Thu Feb 9 23:21 still logged in
Step 5: Replace the 0.0.0.0/0 in the security group with this IP (e.g. 52.95.75.17/32).
Feel free to use my PowerShell script for this.
The script detects your public IP and adds it to the inbound security group rules of dedicated RDP and SSH security groups.
If these groups do not exist, the script will create them and attach them to the appropriate instances.
https://github.com/manuelh2410/public/blob/1/AWSIP_Linux_Win.ps1
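The same idea can also be scripted with the AWS CLI (a sketch; sg-0123456789 is a placeholder group ID, and it assumes the CLI is installed and configured with credentials):

```shell
# helper: turn a bare IPv4 address into the /32 CIDR the rule needs
to_cidr() { printf '%s/32' "$1"; }

# detect the current public IP and authorize SSH from it
# (uncomment to run; needs network access and AWS credentials)
# MYIP=$(curl -s https://checkip.amazonaws.com)
# aws ec2 authorize-security-group-ingress \
#     --group-id sg-0123456789 \
#     --protocol tcp --port 22 \
#     --cidr "$(to_cidr "$MYIP")"

to_cidr 52.95.75.17   # prints 52.95.75.17/32
```

Re-running the script after your office IP changes keeps the rule current without ever opening the group to 0.0.0.0/0.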

KeepAlived + HAProxy gets connection refused after a while

I have the following scenario: 4 VMs running Red Hat Enterprise Linux 7:
20.1.67.230 server (VIRTUAL IP) (not a host)
20.1.67.219 haproxy1 (LOAD BALANCER)
20.1.67.229 haproxy2 (LOAD BALANCER)
20.1.67.223 server1 (LOAD TO BALANCE)
20.1.67.213 server2 (LOAD TO BALANCE)
My keepalived.conf file is:
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # check the haproxy process
    interval 2                    # every 2 seconds
    weight 2                      # add 2 points if OK
}
vrrp_instance VI_1 {
    interface enp0s3              # interface to monitor
    state MASTER                  # MASTER on haproxy1, BACKUP on haproxy2
    virtual_router_id 51
    priority 101                  # 101 on haproxy1, 100 on haproxy2
    unicast_src_ip 20.1.67.229    # this is the IP of the interface keepalived listens on
    unicast_peer {                # this is the IP of the peer instance
        20.1.67.219
    }
    virtual_ipaddress {
        20.1.67.230               # virtual IP address
    }
    track_script {
        chk_haproxy
    }
}
When I execute a request against the VIRTUAL IP, for instance:
curl server:8888/info
everything is OK, but only for a while; after some requests the command returns "connection refused".
So I have to restart the keepalived service manually, this way:
systemctl restart keepalived.service
The whole system seems to work well, and the VRRP messages between haproxy1 and haproxy2 are OK; it is just as if the virtual IP is not working properly.
Can anyone point me in the right direction to diagnose and fix this problem?
It was a networking issue: there was another device on the network with the same IP as the virtual IP I had chosen.
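A duplicate-address conflict like this can be checked for with arping's duplicate address detection mode (a sketch; enp0s3 and the VIP are taken from the question, and the command needs root):

```
# replies are printed if some other machine already
# answers ARP for 20.1.67.230
sudo arping -D -I enp0s3 -c 3 20.1.67.230
```

Running this from a host that does not hold the VIP, while keepalived is stopped on both balancers, shows whether a third device is claiming the address.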

Forward EC2 traffic from 1 instance to another?

I set up Countly analytics on the free-tier AWS EC2, but stupidly did not set up an Elastic IP with it. Now the traffic is so great that I can't even log into the analytics, as the CPU is constantly running at 100%.
I am in the process of issuing app updates to change the analytics address to a private domain that forwards to the EC2 instance, so I can change the forwarding in the future.
In the meantime, is it possible for me to set up a second instance and forward all the traffic from the current one to the new one?
I found this http://lastzactionhero.wordpress.com/2012/10/26/remote-port-forwarding-from-ec2/ - will this work from one EC2 instance to another?
Thanks
EDIT ---
Countly log
/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/server.js:529
throw err;
^ ReferenceError: liveApi is not defined
at processUserSession (/home/ubuntu/countlyinstall/countly/api/parts/data/usage.js:203:17)
at /home/ubuntu/countlyinstall/countly/api/parts/data/usage.js:32:13
at /home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/collection.js:1010:5
at Cursor.nextObject (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/cursor.js:653:5)
at commandHandler (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/cursor.js:635:14)
at null.<anonymous> (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/db.js:1709:18)
at g (events.js:175:14)
at EventEmitter.emit (events.js:106:17)
at Server.Base._callHandler (/home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/base.js:130:25)
at /home/ubuntu/countlyinstall/countly/api/node_modules/mongoskin/node_modules/mongodb/lib/mongodb/connection/server.js:522:20
You can follow the steps described in the blog post to do the port forwarding. Just make sure not to forward it to localhost :)
Also, about the 100% CPU: it is probably caused by MongoDB. Did you have a chance to check the process? If it is mongod, issue the mongotop command to see the most time-consuming collection accesses. We can go from there.
Yes, it is possible. I use nginx with a Node.js app. I wanted to redirect traffic from one instance to another; the instance was in a different region and not configured in the same VPC, as mentioned in the AWS documentation.
Step 1: Go to /etc/nginx/sites-enabled and open the default.conf file. Your configuration might be in a different file.
Step 2: Change proxy_pass to your chosen IP/domain/sub-domain:
server
{
    listen 80;
    server_name your_domain.com;
    location / {
        ...
        proxy_pass your_ip;   # you can put a domain or sub-domain with protocol (http/https)
    }
}
Step 3: Then restart nginx:
sudo systemctl restart nginx
This works for any external instance and for instances in a different VPC.
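If running a proxy on the old instance is not an option, the same forwarding can be sketched at the kernel level with iptables (NEW_IP is a placeholder for the new instance's address; these rules are not persistent across reboots, and port 80 is assumed):

```
# enable packet forwarding in the kernel
sudo sysctl -w net.ipv4.ip_forward=1

# rewrite incoming port-80 traffic to the new instance,
# and masquerade so replies route back through this host
sudo iptables -t nat -A PREROUTING  -p tcp --dport 80 -j DNAT --to-destination NEW_IP:80
sudo iptables -t nat -A POSTROUTING -j MASQUERADE
```

Either way, the EC2 security groups on both instances must allow the forwarded port.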