I have a simulation where two UDPBasicApp modules (a client and a server) are connected via an Ethernet link. Instead, I want them to be connected through a wireless channel. The network is defined by the following NED file:
package udpbasic;
import inet.networklayer.autorouting.ipv4.IPv4NetworkConfigurator;
import inet.nodes.ethernet.Eth10M;
import inet.nodes.inet.StandardHost;
network ClientServer
{
    @display("bgb=380,247");
    submodules:
        client: StandardHost
        {
            @display("p=84,100");
        }
        server: StandardHost
        {
            @display("p=278,100");
        }
        configurator: IPv4NetworkConfigurator
        {
            @display("p=181,188");
        }
    connections:
        client.ethg++ <--> Eth10M <--> server.ethg++;
}
I know that I have to change the line
client.ethg++ <--> Eth10M <--> server.ethg++;
where the Ethernet link is defined. Can I connect the client and the server through
a wireless link? Obviously, I am looking for the most basic configuration.
I am new to OMNeT++ and INET; I have searched the INET API reference, and it doesn't
help much. I would appreciate any suggestions.
I recommend reading the wireless tutorial in INET 3.0.
https://github.com/inet-framework/inet/blob/master/tutorials/wireless/omnetpp.ini
Ini file:
[General]
# Some global configuration to make the model simpler
# At this point you should take a look at the NED files corresponding to this Ini file.
# They are rather simple. The only interesting thing is that they use parametrized types
# (i.e. the "like" keyword), so we will be able to change the type of the different modules from the Ini file.
# This allows us to go through the tutorial by changing only parameter values in this file.
# Limit the simulation to 25s
sim-time-limit = 25s
# Let's configure ARP
# ARP in the real world is used to figure out the MAC address of a node from its IPv4 address.
# We do not want to use it in this wireless tutorial as it just adds some uninteresting
# message exchanges before the real communication between the nodes can start. We will use
# the GlobalARP module instead, which can automatically provide all the MAC-IP associations
# for the nodes out of band.
**.arpType = "GlobalARP"
# Now we are ready to jump into the tutorial
[Config Wireless01]
description = Two nodes communicating via UDP
network = WirelessA
# Configure an application for hostA that sends constant
# UDP traffic of around 800Kbps (+ protocol overhead)
*.hostA.numUdpApps = 1
*.hostA.udpApp[0].typename = "UDPBasicApp"
*.hostA.udpApp[0].destAddresses = "hostB"
*.hostA.udpApp[0].destPort = 5000
*.hostA.udpApp[0].messageLength = 1000B
*.hostA.udpApp[0].sendInterval = exponential(10ms)
# Configure an app that receives the UDP traffic (and simply drops the data)
*.hostB.numUdpApps = 1
*.hostB.udpApp[0].typename = "UDPSink"
*.hostB.udpApp[0].localPort = 5000
# Configure the hosts to have a single "ideal" wireless NIC. An IdealWirelessNic
# can be configured with a maximum communication range. All packets within range
# are always received successfully, while out-of-range messages are never received.
# This is useful if we are not interested in how the actual messages get to their destination;
# we just want to be sure that they get there once the nodes are in range.
*.host*.wlan[*].typename = "IdealWirelessNic"
# All radios and MACs should run on 1Mbps in our examples
**.bitrate = 1Mbps
# Mandatory physical layer parameters
*.host*.wlan[*].radio.transmitter.maxCommunicationRange = 500m
# Simplify the IdealWirelessNic even further. We do not even care about
# transmission collisions. Any number of nodes in range can transmit at the same time
# and the packets will still be successfully delivered.
*.host*.wlan[*].radio.receiver.ignoreInterference = true
# Result: HostA can send data to hostB using almost the whole 1Mbps bandwidth.
Corresponding NED file:
package inet.tutorials.wireless;
import inet.networklayer.configurator.ipv4.IPv4NetworkConfigurator;
import inet.node.inet.INetworkNode;
import inet.physicallayer.contract.packetlevel.IRadioMedium;
// - create a network and specify the size to 500x500
// - drop an IPv4NetworkConfigurator and rename it to "configurator"
// - drop an IdealRadioMedium module and rename to "radioMedium"
// - drop two StandardHosts at positions (100,100) and (400,400) and
// rename them to hostA and hostB
network WirelessA
{
    @display("bgb=500,500");
    @figure[thruputInstrument](type=gauge; pos=370,90; size=120,120; maxValue=2500; tickSize=500; colorStrip=green 0.75 yellow 0.9 red; label=Number of packets received; moduleName=hostB.udpApp[0]; signalName=rcvdPk);
    string hostType = default("WirelessHost");
    string mediumType = default("IdealRadioMedium");
    submodules:
        configurator: IPv4NetworkConfigurator {
            @display("p=149,29");
        }
        radioMedium: <mediumType> like IRadioMedium {
            @display("p=309,24");
        }
        hostA: <hostType> like INetworkNode {
            @display("p=50,250");
        }
        hostB: <hostType> like INetworkNode {
            @display("p=450,250");
        }
}
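Applied to your original ClientServer network, a minimal sketch could look like this (the module and package paths follow INET 3.x and are assumptions that may differ in your INET version; the IdealWirelessNic parameters from the ini file above apply unchanged):
package udpbasic;
import inet.networklayer.configurator.ipv4.IPv4NetworkConfigurator;
import inet.node.inet.WirelessHost;
import inet.physicallayer.idealradio.IdealRadioMedium;
network ClientServerWireless
{
    @display("bgb=380,247");
    submodules:
        // WirelessHost is a host variant with wlan[] NICs instead of ethg++
        client: WirelessHost {
            @display("p=84,100");
        }
        server: WirelessHost {
            @display("p=278,100");
        }
        configurator: IPv4NetworkConfigurator {
            @display("p=181,188");
        }
        // the shared radio medium replaces the point-to-point Eth10M link
        radioMedium: IdealRadioMedium {
            @display("p=181,40");
        }
    // note: no connections section; packets travel through radioMedium
}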
I have a C++ program which does lots of stuff, but most importantly it is set up to use F-Stack, which is built on DPDK:
int main(int argc, char * argv[])
{
    ff_init(argc, argv);
    ...
}
And I run the program like this:
sudo ./main --conf /etc/f-stack.conf --proc-type=primary
This is the error message I am receiving:
virtio_dev_configure(): Unsupported Rx multi queue mode 1
Port0 dev_configure = -22
EAL: Error - exiting with code: 1
Cause: init_port_start failed
I have not had this problem before when running this executable on a CentOS 8 AWS instance. I am now running it on a CentOS 8 Alibaba Cloud instance. So there's possibly some difference when running on Alibaba.
The only other thing I can think of is that there might be a configuration problem. However, I copied /etc/f-stack.conf from my AWS instance to Alibaba and updated some IP addresses, nothing else. So nothing significant has changed.
Any idea what's going on here and how to fix it?
Edit: here is my /etc/f-stack.conf file (without IP addresses included):
[dpdk]
# Hexadecimal bitmask of cores to run on.
lcore_mask=1
# Number of memory channels.
channel=4
# Specify base virtual address to map.
#base_virtaddr=0x7f0000000000
# Promiscuous mode of nic, default: enabled.
promiscuous=1
numa_on=1
# TX checksum offload skip, default: disabled.
# We need this switch enabled in the following cases:
# -> The application wants to enforce a wrong checksum for testing purposes
# -> Some cards advertise the offload capability but don't actually calculate the checksum.
tx_csum_offoad_skip=0
# TCP segment offload, default: disabled.
tso=0
# HW vlan strip, default: enabled.
vlan_strip=1
# sleep when no pkts incoming
# unit: microseconds
idle_sleep=0
# Packet TX delay time (0-100) when sending fewer than 32 pkts.
# default 100 us.
# if set to 0, packets are sent immediately.
# if set >100, the delay is capped at 100 us.
# unit: microseconds
pkt_tx_delay=100
# use symmetric Receive-side Scaling(RSS) key, default: disabled.
symmetric_rss=0
# PCI device enable list.
# And driver options
#pci_whitelist=02:00.0
# enabled port list
#
# EBNF grammar:
#
# exp ::= num_list {"," num_list}
# num_list ::= <num> | <range>
# range ::= <num>"-"<num>
# num ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'
#
# examples
# 0-3 ports 0, 1,2,3 are enabled
# 1-3,4,7 ports 1,2,3,4,7 are enabled
#
# If bonding is used, configure the bonding port id in port_list
# and do not configure the slave port ids in port_list.
# For example, if ports 0 and 1 are bonded into bonding port 2,
# set `port_list=2` and configure a `[port2]` section.
port_list=0
# Number of vdev.
nb_vdev=0
# Number of bond.
nb_bond=0
# Each core writes into its own pcap file, which is opened once and closed when full.
# Supports dumping the first snaplen bytes of each packet.
# If a pcap file grows larger than savelen bytes, it is closed and dumping continues into the next file.
[pcap]
enable=0
snaplen=96
savelen=16777216
savepath=.
# Port config section
# Corresponds to dpdk.port_list's index: port0, port1...
[port0]
addr=<ADDR>
netmask=<NETMASK>
broadcast=<BROADCAST>
gateway=<GATEWAY>
# IPv6 net addr, Optional parameters.
#addr6=ff::02
#prefix_len=64
#gateway6=ff::01
# Multi virtual IPv4/IPv6 net addr, Optional parameters.
# `vip_ifname`: default `f-stack-x`
# `vip_addr`: Separated by semicolons, MAX number 64;
# Only netmask 255.255.255.255 and broadcast x.x.x.255 are supported now; hard-coded in `ff_veth_setvaddr`.
# `vip_addr6`: Separated by semicolons, MAX number 64.
# `vip_prefix_len`: All addr6 use the same prefix now, default 64.
#vip_ifname=lo0
#vip_addr=192.168.1.3;192.168.1.4;192.168.1.5;192.168.1.6
#vip_addr6=ff::03;ff::04;ff::05;ff::06;ff::07
#vip_prefix_len=64
# lcore list used to handle this port
# the format is same as port_list
#lcore_list=0
# bonding slave port list used to handle this port
# need to config while this port is a bonding port
# the format is same as port_list
#slave_port_list=0,1
# Vdev config section
# Corresponds to dpdk.nb_vdev's index: vdev0, vdev1...
# iface : Usually should not be set.
# path : The vuser device path in container. Required.
# queues : The max queues of vuser. Optional, default 1, greater than or equal to the number of processes.
# queue_size : Queue size. Optional, default 256.
# mac : The mac address of vuser. Optional, default random; if vhost uses a phy NIC, it should be set to the phy NIC's mac.
# cq : Optional, if queues = 1, default 0; if queues > 1, default 1.
#[vdev0]
##iface=/usr/local/var/run/openvswitch/vhost-user0
#path=/var/run/openvswitch/vhost-user0
#queues=1
#queue_size=256
#mac=00:00:00:00:00:01
#cq=0
# bond config section
# See http://doc.dpdk.org/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
#[bond0]
#mode=4
#slave=0000:0a:00.0,slave=0000:0a:00.1
#primary=0000:0a:00.0
#mac=f0:98:38:xx:xx:xx
## opt argument
#socket_id=0
#xmit_policy=l23
#lsc_poll_period_ms=100
#up_delay=10
#down_delay=50
# Kni config: if enabled and method=reject,
# all packets that do not belong to the following tcp_port and udp_port
# will be passed to the kernel; if method=accept, only packets that belong to
# the following tcp_port and udp_port will be passed to the kernel.
#[kni]
#enable=1
#method=reject
# The format is same as port_list
#tcp_port=80,443
#udp_port=53
# FreeBSD network performance tuning configurations.
# Most native FreeBSD configurations are supported.
[freebsd.boot]
hz=100
# Block out a range of descriptors to avoid overlap
# with the kernel's descriptor space.
# You can increase this value according to your app.
fd_reserve=1024
kern.ipc.maxsockets=262144
net.inet.tcp.syncache.hashsize=4096
net.inet.tcp.syncache.bucketlimit=100
net.inet.tcp.tcbhashsize=65536
kern.ncallout=262144
kern.features.inet6=1
net.inet6.ip6.auto_linklocal=1
net.inet6.ip6.accept_rtadv=2
net.inet6.icmp6.rediraccept=1
net.inet6.ip6.forwarding=0
[freebsd.sysctl]
kern.ipc.somaxconn=32768
kern.ipc.maxsockbuf=16777216
net.link.ether.inet.maxhold=5
net.inet.tcp.fast_finwait2_recycle=1
net.inet.tcp.sendspace=16384
net.inet.tcp.recvspace=8192
#net.inet.tcp.nolocaltimewait=1
net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.sack.enable=1
net.inet.tcp.blackhole=1
net.inet.tcp.msl=2000
net.inet.tcp.delayed_ack=0
net.inet.udp.blackhole=1
net.inet.ip.redirect=0
net.inet.ip.forwarding=0
Edit 2: I added pci_whitelist=[PCIe BDF of NIC] to config and ran the following command:
The reason for the error is the check for the virtio PMD in the function virtio_dev_configure in [dpdk root folder]/drivers/net/virtio/virtio_ethdev.c. It is triggered because F-Stack enables RSS for better flow distribution over its port-queue pairs.
There are two possible ways to fix the problem:
find a configuration parameter in f-stack.conf to disable RSS, or
change the F-Stack port configuration logic not to use RSS (by editing code).
File: lib/ff_dpdk_if.c
Edit line 627: change port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS; to port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE; and rebuild F-Stack.
Note: if you use a physical NIC, RSS is supported in most cases, so there will be no error there.
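For reference, a minimal sketch of that edit (the exact line number and surrounding code vary by F-Stack version):
/* lib/ff_dpdk_if.c, in the port init path (around line 627; varies by version) */

/* Before: RSS multi-queue mode, which the virtio PMD rejects with -EINVAL (-22) */
port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;

/* After: no multi-queue distribution, so virtio_dev_configure() accepts the config */
port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;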
I'm currently trying to access a Kafka cluster (Bitnami) from my local machine; however, even after exposing the required host and ports in server.properties and adding firewall rules to allow port 9092, it just doesn't connect.
I'm running a 2-broker, 1-ZooKeeper configuration.
Expected Output: Producer.bootstrap_connected() should return True.
Actual Output: False
server.properties
listeners=SASL_PLAINTEXT://:9092
advertised.listeners=SASL_PLAINTEXT://gcp-cluster-name:9092
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
security.inter.broker.protocol=SASL_PLAINTEXT
Consumer.py
from kafka import KafkaConsumer
import json
import ssl
sasl_mechanism = 'PLAIN'
security_protocol = 'SASL_PLAINTEXT'
# Create a new context using system defaults, disable all but TLS1.2
context = ssl.create_default_context()
context.options |= ssl.OP_NO_TLSv1
context.options |= ssl.OP_NO_TLSv1_1
consumer = KafkaConsumer(
    'organic-sense',
    bootstrap_servers='<server-ip>:9092',
    value_deserializer=lambda x: json.loads(x.decode('utf-8')),
    ssl_context=context,
    sasl_plain_username='user',
    sasl_plain_password='<password>',
    sasl_mechanism=sasl_mechanism,
    security_protocol=security_protocol,
)
print(consumer.bootstrap_connected())
for data in consumer:
    print(data)
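For reference, a minimal producer-side probe that should print True once the advertised listener is reachable (a sketch assuming kafka-python 2.x, with the same placeholder host and credentials):
from kafka import KafkaProducer

# Minimal connectivity check; <server-ip>, 'user' and <password> are placeholders.
producer = KafkaProducer(
    bootstrap_servers='<server-ip>:9092',
    security_protocol='SASL_PLAINTEXT',
    sasl_mechanism='PLAIN',
    sasl_plain_username='user',
    sasl_plain_password='<password>',
)
print(producer.bootstrap_connected())  # expected: True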
I have a consumer using Akka Streams where we are consuming packets from Kafka. The number of threads is in the several thousands and keeps increasing.
I took a thread dump and found two kinds of threads with the maximum count.
Following is the config file:
akka.kafka.consumer {
# Tuning property of scheduled polls.
# Controls the interval from one scheduled poll to the next.
poll-interval = 250ms
# Tuning property of the `KafkaConsumer.poll` parameter.
# Note that non-zero value means that the thread that
# is executing the stage will be blocked. See also the `wakeup-timeout` setting below.
poll-timeout = 50ms
# The stage will delay stopping the internal actor to allow processing of
# messages already in the stream (required for successful committing).
# Prefer use of `DrainingControl` over a large stop-timeout.
stop-timeout = 30s
# Duration to wait for `KafkaConsumer.close` to finish.
close-timeout = 20s
# If offset commit requests are not completed within this timeout
# the returned Future is completed with `CommitTimeoutException`.
# The `Transactional.source` waits this amount of time for the producer to mark messages as not
# being in flight anymore as well as waiting for messages to drain, when rebalance is triggered.
commit-timeout = 15s
# If commits take longer than this time a warning is logged
commit-time-warning = 1s
# Not used anymore (since 1.0-RC1)
# wakeup-timeout = 3s
# Not used anymore (since 1.0-RC1)
# max-wakeups = 10
# If set to a finite duration, the consumer will re-send the last committed offsets periodically
# for all assigned partitions. See https://issues.apache.org/jira/browse/KAFKA-4682.
commit-refresh-interval = infinite
# Not used anymore (since 1.0-RC1)
# wakeup-debug = true
# Fully qualified config path which holds the dispatcher configuration
# to be used by the KafkaConsumerActor. Some blocking may occur.
use-dispatcher = "akka.default-dispatcher"
# Properties defined by org.apache.kafka.clients.consumer.ConsumerConfig
# can be defined in this configuration section.
kafka-clients {
# Disable auto-commit by default
enable.auto.commit = false
}
# Time to wait for pending requests when a partition is closed
wait-close-partition = 500ms
# Limits the query to Kafka for a topic's position
position-timeout = 5s
# When using `AssignmentOffsetsForTimes` subscriptions: timeout for the
# call to Kafka's API
offset-for-times-timeout = 5s
# Timeout for akka.kafka.Metadata requests
# This value is used instead of Kafka's default from `default.api.timeout.ms`
# which is 1 minute.
metadata-request-timeout = 5s
# Interval for checking that transaction was completed before closing the consumer.
# Used in the transactional flow for exactly-once-semantics processing.
eos-draining-check-interval = 30ms
}
#####################################
# Akka Stream Reference Config File #
#####################################
akka {
stream {
# Default materializer settings
materializer {
# Initial size of buffers used in stream elements
initial-input-buffer-size = 4
# Maximum size of buffers used in stream elements
max-input-buffer-size = 8
# Fully qualified config path which holds the dispatcher configuration
# to be used by ActorMaterializer when creating Actors.
# When this value is left empty, the default-dispatcher will be used.
dispatcher = "akka.default-dispatcher"
# Cleanup leaked publishers and subscribers when they are not used within a given
# deadline
subscription-timeout {
# when the subscription timeout is reached one of the following strategies on
# the "stale" publisher:
# cancel - cancel it (via `onError` or subscribing to the publisher and
# `cancel()`ing the subscription right away)
# warn - log a warning statement about the stale element (then drop the
# reference to it)
# noop - do nothing (not recommended)
mode = cancel
# time after which a subscriber / publisher is considered stale and eligible
# for cancelation (see `akka.stream.subscription-timeout.mode`)
timeout = 5s
}
# Enable additional troubleshooting logging at DEBUG log level
debug-logging = off
# Maximum number of elements emitted in batch if downstream signals large demand
output-burst-limit = 1000
# Enable automatic fusing of all graphs that are run. For short-lived streams
# this may cause an initial runtime overhead, but most of the time fusing is
# desirable since it reduces the number of Actors that are created.
# Deprecated, since Akka 2.5.0, setting does not have any effect.
auto-fusing = on
# Those stream elements which have explicit buffers (like mapAsync, mapAsyncUnordered,
# buffer, flatMapMerge, Source.actorRef, Source.queue, etc.) will preallocate a fixed
# buffer upon stream materialization if the requested buffer size is less than this
# configuration parameter. The default is very high because failing early is better
# than failing under load.
#
# Buffers sized larger than this will dynamically grow/shrink and consume more memory
# per element than the fixed size buffers.
max-fixed-buffer-size = 1000000000
# Maximum number of sync messages that actor can process for stream to substream communication.
# The parameter allows interrupting synchronous processing to get upstream/downstream messages.
# This accelerates message processing happening within the same actor while keeping the system responsive.
sync-processing-limit = 1000
debug {
# Enables the fuzzing mode which increases the chance of race conditions
# by aggressively reordering events and making certain operations more
# concurrent than usual.
# This setting is for testing purposes, NEVER enable this in a production
# environment!
# To get the best results, try combining this setting with a throughput
# of 1 on the corresponding dispatchers.
fuzzing-mode = off
}
}
# Fully qualified config path which holds the dispatcher configuration
# to be used by ActorMaterializer when creating Actors for IO operations,
# such as FileSource, FileSink and others.
blocking-io-dispatcher = "akka.stream.default-blocking-io-dispatcher"
default-blocking-io-dispatcher {
type = "Dispatcher"
executor = "thread-pool-executor"
throughput = 1
thread-pool-executor {
fixed-pool-size = 16
}
}
}
# configure overrides to ssl-configuration here (to be used by akka-streams, and akka-http – i.e. when serving https connections)
ssl-config {
protocol = "TLSv1.2"
}
}
ssl-config {
logger = "com.typesafe.sslconfig.akka.util.AkkaLoggerBridge"
}
akka.default-dispatcher {
type = "Dispatcher"
executor = "thread-pool-executor"
thread-pool-executor {
# minimum number of threads to cap factor-based core number to
core-pool-size-min = 5
# No of core threads ... ceil(available processors * factor)
core-pool-size-factor = 2.0
# maximum number of threads to cap factor-based number to
core-pool-size-max = 20
}
# Throughput defines the maximum number of messages to be
# processed per actor before the thread jumps to the next actor.
# Set to 1 for as fair as possible.
throughput = 5
}
Following are the thread dumps with the maximum thread count.
One is:
priority:5 - threadId:0x00007f9414008800 - nativeId:0x5575 - nativeId (decimal):21877 - state:WAITING
stackTrace:
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000740597170> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at akka.dispatch.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Locked ownable synchronizers:
The other is:
default-scheduler-1
priority:5 - threadId:0x00007f947805d800 - nativeId:0x638a - nativeId (decimal):25482 - state:TIMED_WAITING
stackTrace:
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at akka.actor.LightArrayRevolverScheduler.waitNanos(LightArrayRevolverScheduler.scala:87)
at akka.actor.LightArrayRevolverScheduler$$anon$3.nextTick(LightArrayRevolverScheduler.scala:271)
at akka.actor.LightArrayRevolverScheduler$$anon$3.run(LightArrayRevolverScheduler.scala:241)
at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
- None
The CPU utilization goes up to 800%. What is causing such high CPU utilization? Please advise.
I use the syslog plugin available with the Kong API gateway, and I have the following:
{ "api_id": "some_id",
"id": "some_id",
"created_at": 4544444,
"enabled": true,
"name": "syslog",
"config":
{ "client_errors_severity": "info",
"server_errors_severity": "info",
"successful_severity": "info",
"log_level": "emerg"
} }
I use CentOS 7, and I have the following conf file (/etc/rsyslog.conf):
# rsyslog configuration file
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
#### MODULES ####
# The imjournal module below is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark # provides --MARK-- message capability
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog
# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$template precise,"%syslogpriority%,%syslogfacility%,%timegenerated%,%HOSTNAME%,%syslogtag%,%msg%\n"
$ActionFileDefaultTemplate precise
# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on
# File to store the position in the journal
$IMJournalStateFile imjournal.state
#### RULES ####
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages
# The authpriv file has restricted access.
authpriv.* /var/log/secure
# Log all the mail messages in one place.
mail.* -/var/log/maillog
# Log cron stuff
cron.* /var/log/cron
# Everybody gets emergency messages
*.emerg :omusrmsg:*
# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# Save boot messages also to boot.log
local7.* /var/log/boot.log
# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList # run asynchronously
#$ActionResumeRetryCount -1 # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* ##remote-host:514
# ### end of the forwarding rule ###
The syslog daemon is running on the CentOS 7 machine and is configured with a logging level severity (info) the same as or lower than the configured config.log_level (emerg).
I am unable to see any logs in
/var/log/messages
The syslog daemon must be configured with a logging level severity the same as or lower than the set config.log_level for proper logging.
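To rule out filtering, a hypothetical catch-all rule can be dropped into /etc/rsyslog.d/ (the file name and target path below are arbitrary):
# /etc/rsyslog.d/00-catchall.conf (hypothetical test rule)
# Log every facility at every severity to a scratch file, then restart rsyslog.
# If Kong's messages show up here but not in /var/log/messages, the
# facility/severity filters in rsyslog.conf are dropping them.
*.* /var/log/all-messages.log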
I'm using the following project for enabling APNS in my project:
https://github.com/stephenmuss/django-ios-notifications
I'm able to send and receive push notifications on my production app fine, but the sandbox APNS is having strange issues which I'm not able to solve. It constantly fails to connect to the push service. When I manually call _connect() on the APNService or FeedbackService classes, I get the following error:
File "/Users/MyUser/git/prod/django/ios_notifications/models.py", line 56, in _connect
self.connection.do_handshake()
Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure')]
I tried recreating the APNS certificate a number of times and constantly get the same error. Is there anything else I'm missing?
I'm using the endpoints gateway.push.apple.com and gateway.sandbox.push.apple.com for connecting to the service. Is there anything else I should look into for this? I have read the following:
Apns php error "Failed to connect to APNS: 110 Connection timed out."
Converting PKCS#12 certificate into PEM using OpenSSL
Error Using PHP for iPhone APNS
Turns out Apple changed the SSL context from SSL3 to TLSv1 in development. They will do this in production eventually (not sure when). The following link shows my pull request, which was accepted into the above project:
https://github.com/stephenmuss/django-ios-notifications/commit/879d589c032b935ab2921b099fd3286440bc174e
Basically, use OpenSSL.SSL.TLSv1_METHOD if you're using Python, or something similar in other languages.
Although OpenSSL.SSL.SSLv3_METHOD works in production, it may not work in the near future. OpenSSL.SSL.TLSv1_METHOD works in production and development.
UPDATE
Apple will remove SSL 3.0 support in production on October 29th, 2014 due to the POODLE flaw.
https://developer.apple.com/news/?id=10222014a
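A quick sketch of the handshake change with pyOpenSSL (certificate paths are placeholders):
import socket
from OpenSSL import SSL

# TLSv1 instead of SSLv3; SSLv3_METHOD is what fails against the sandbox gateway.
context = SSL.Context(SSL.TLSv1_METHOD)
context.use_certificate_file('/path/to/apns-cert.pem')  # placeholder path
context.use_privatekey_file('/path/to/apns-key.pem')    # placeholder path

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
connection = SSL.Connection(context, sock)
connection.connect(('gateway.sandbox.push.apple.com', 2195))
connection.do_handshake()  # succeeds with TLSv1, fails with SSLv3
connection.close()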
I have worked on APNS using Python/Django. For this you need three things: a URL, a port, and the certificate provided by Apple for authentication.
views.py
import socket, ssl, json, struct
theCertfile = '/tmp/abc.cert' ## absolute path where certificate file is placed.
ios_url = 'gateway.push.apple.com'
ios_port = 2195
deviceToken = '3234t54tgwg34g' ## ios device token to which you want to send notification
def ios_push(msg, theCertfile, ios_url, ios_port, deviceToken):
    # Standard APNS payload
    thePayLoad = {
        'aps': {
            'alert': msg,
            'sound': 'default',
            'badge': 0,
        },
    }
    theHost = (ios_url, ios_port)
    data = json.dumps(thePayLoad)

    # Convert the hex device token to raw bytes
    deviceToken = deviceToken.replace(' ', '')
    byteToken = deviceToken.decode('hex')  # Python 2; use bytes.fromhex(deviceToken) on Python 3

    # Legacy binary APNS frame: command, token length, token, payload length, payload
    theFormat = '!BH32sH%ds' % len(data)
    theNotification = struct.pack(theFormat, 0, 32, byteToken, len(data), data)

    # Create our connection using the certfile saved locally
    ssl_sock = ssl.wrap_socket(socket.socket(socket.AF_INET, socket.SOCK_STREAM), certfile=theCertfile)
    ssl_sock.connect(theHost)

    # Write out our data
    ssl_sock.write(theNotification)

    # Close the connection -- apple would prefer that we keep
    # a connection open and push data as needed.
    ssl_sock.close()
Hopefully this works for you.
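A hypothetical call, reusing the variables defined at the top of views.py:
ios_push('Hello from Django!', theCertfile, ios_url, ios_port, deviceToken)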