Context:
We are migrating a driver, currently implemented as a kernel extension, to the DriverKit framework.
The driver works with Thunderbolt RAID storage devices.
When connected to the host through the Thunderbolt interface, the device presents itself to the OS as a PCI device. The main function of our driver (.kext) is to create a "virtual" SCSI device in the OS for each virtual RAID array, so that the OS can work with these SCSI drives as usual disk storage.
We use https://developer.apple.com/documentation/scsicontrollerdriverkit to migrate this functionality in the dext version of the driver.
Current issue:
When a device is connected, the dext cannot create a SCSI drive in the OS.
Technically, our dext tries to create a SCSI drive using the UserCreateTargetForID() method.
At this step the OS sends the first SCSI command, "Test Unit Ready", to the device to check whether it is a SCSI device.
We process this command on an additional thread, separate from the dext's main process (as recommended in the DriverKit documentation).
We can see in the logs that the device receives this command and responds, but when our dext sends this response back to the OS, the process gets stuck waiting. How can we understand why this happens and fix it?
More details:
We are migrating the functionality of an already existing .kext driver. We checked the kext driver's logs for this step:
15:06:17.902539+0700 Target device try to create for idx:0
15:06:17.902704+0700 Send command 0 for target 0 len 0
15:06:18.161777+0700 Complete command: 0 for target: 0 Len: 0 status: 0 flags: 0
15:06:18.161884+0700 Send command 18 for target 0 len 6
15:06:18.161956+0700 Complete command: 18 for target: 0 Len: 6 status: 0 flags: 0
15:06:18.162010+0700 Send command 18 for target 0 len 44
15:06:18.172972+0700 Complete command: 18 for target: 0 Len: 44 status: 0 flags: 0
15:06:18.275501+0700 Send command 18 for target 0 len 36
15:06:18.275584+0700 Complete command: 18 for target: 0 Len: 36 status: 0 flags: 0
15:06:18.276257+0700 Target device created for idx:0
We can see the success message “Target device created for idx:0”.
In the dext logs of the same step:
we do not see the “Send command 18 for target 0 len 6” entry that we have in the kext logs;
there is no log of the successful result “Target device created for idx:0”.
I'll add the thread name to each line of the dext log (CustomThread, DefaultQueue, SendCommandCustomThread, InterruptQueue):
15:54:10.903466+0700 Try to create target for 0 UUID 432421434863538456 - CustomThread
15:54:10.903633+0700 UserDoesHBAPerformAutoSense - DefaultQueue
15:54:10.903763+0700 UserInitializeTargetForID - DefaultQueue
15:54:10.903876+0700 UserDoesHBASupportMultiPathing DefaultQueue
15:54:10.904200+0700 UserProcessParallelTask start - DefaultQueue
15:54:10.904298+0700 Sent command : 0 len 0 for target 0 - SendCommandCustomThread
15:54:11.163003+0700 Disable interrupts - InterruptQueue
15:54:11.163077+0700 Complete cmd : 0 for target: 0 len: 0 status: 0 flags: 0 - InterruptQueue
15:54:11.163085+0700 Enable interrupts - InterruptQueue
Code for completing the task:
SCSIUserParallelResponse osRsp = {0};
osRsp.fControllerTaskIdentifier = osTask->taskId;
osRsp.fTargetID = osTask->targetId;
osRsp.fServiceResponse = kSCSIServiceResponse_TASK_COMPLETE;
osRsp.fCompletionStatus = (SCSITaskStatus) response->status;
// Transfer length computation.
osRsp.fBytesTransferred = transferLength; // === 0 for this case.
ParallelTaskCompletion(osTask->action, osRsp);
osTask->action->release();
I will appreciate any help.
This is effectively a deadlock, which you seem to have already worked out. It's not 100% clear from your question, but as I initially had the same problem, I assume you're calling UserCreateTargetForID from the driver's default queue. This won't work; you must call it from a non-default queue, because SCSIControllerDriverKit assumes that your default queue is idle and ready to handle requests from the kernel while you are calling this function. The header docs are very ambiguous on this, though they do mention it:
The dext class should call this method to create a new target for the
targetID. The framework ensures that the new target is created before the call returns.
Note that this call to the framework runs on the Auxiliary queue.
SCSIControllerDriverKit expects your driver to use 3 different dispatch queues (default, auxiliary, and interrupt), although I think it can be done with 2 as well. I recommend you (re-)watch the relevant part of the WWDC2020 session video about how exactly Apple wants you to use the 3 dispatch queues. The framework does not seem to be very flexible on this point.
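For illustration, here is a minimal sketch of dispatching the call onto a dedicated queue. The queue name, the ivars field, and the error handling are my assumptions; check the exact UserCreateTargetForID signature in your SDK headers:

// Sketch: create a dedicated (non-default) queue once, e.g. in Start(),
// and keep it in ivars. Names here are hypothetical.
IODispatchQueue *queue = nullptr;
kern_return_t ret = IODispatchQueue::Create("TargetCreationQueue", 0, 0, &queue);
if (ret == kIOReturnSuccess) {
    ivars->targetCreationQueue = queue;
}

// Later, when a RAID array is discovered, create the SCSI target off the
// default queue, so the default queue stays free to service the kernel's
// upcalls (e.g. the initial TEST UNIT READY).
ivars->targetCreationQueue->DispatchAsync(^{
    kern_return_t kr = UserCreateTargetForID(targetID);
    if (kr != kIOReturnSuccess) {
        os_log(OS_LOG_DEFAULT, "UserCreateTargetForID failed: 0x%x", kr);
    }
});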
Good luck with the rest of the driver port, I found this DriverKit framework even more fussy than the other ones.
Thanks to pmdj for pointing me in the right direction. In my case, the answer was simply to add initialization of the version field of the response:
osRsp.version = kScsiUserParallelTaskResponseCurrentVersion1;
It looks obvious, but there is no information in the docs or the WWDC2020 video about initializing the version field.
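For completeness, the completion code from the question with the fix applied:

SCSIUserParallelResponse osRsp = {0};
osRsp.version = kScsiUserParallelTaskResponseCurrentVersion1; // the missing piece
osRsp.fControllerTaskIdentifier = osTask->taskId;
osRsp.fTargetID = osTask->targetId;
osRsp.fServiceResponse = kSCSIServiceResponse_TASK_COMPLETE;
osRsp.fCompletionStatus = (SCSITaskStatus) response->status;
osRsp.fBytesTransferred = transferLength; // 0 for Test Unit Ready
ParallelTaskCompletion(osTask->action, osRsp);
osTask->action->release();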
My project is a hardware RAID user-space driver, and it has now passed its I/O stress tests. Your problem is likely in the SCSI commands that transfer data: to complete the SCSI INQUIRY command, your driver has to send data back to the system, and I think you also used UserGetDataBuffer for that. It behaves somewhat differently from the corresponding IOKit functionality.
kern_return_t IMPL(XXXXUserSpaceDriver, UserProcessParallelTask)
{
    /*
    ** UserGetDataBuffer:
    ** virtual kern_return_t UserGetDataBuffer(SCSIDeviceIdentifier fTargetID,
    **     uint64_t fControllerTaskIdentifier, IOBufferMemoryDescriptor **buffer);
    */
    if (parallelTask.fCommandDescriptorBlock[0] == SCSI_CMD_INQUIRY)
    {
        IOBufferMemoryDescriptor *data_buffer_memory_descriptor = nullptr;
        if ((UserGetDataBuffer(parallelTask.fTargetID, parallelTask.fControllerTaskIdentifier, &data_buffer_memory_descriptor) == kIOReturnSuccess) &&
            (data_buffer_memory_descriptor != NULL))
        {
            IOAddressSegment data_buffer_virtual_address_segment = {0};
            if (data_buffer_memory_descriptor->GetAddressRange(&data_buffer_virtual_address_segment) == kIOReturnSuccess)
            {
                IOAddressSegment data_buffer_physical_address_segment = {0};
                IODMACommandSpecification dmaSpecification;
                IODMACommand *data_buffer_iodmacommand = nullptr;
                bzero(&dmaSpecification, sizeof(dmaSpecification));
                dmaSpecification.options = kIODMACommandSpecificationNoOptions;
                dmaSpecification.maxAddressBits = 64;
                if (IODMACommand::Create(ivars->pciDevice, kIODMACommandCreateNoOptions, &dmaSpecification, &data_buffer_iodmacommand) == kIOReturnSuccess)
                {
                    uint64_t dmaFlags = kIOMemoryDirectionInOut;
                    uint32_t dmaSegmentCount = 1;
                    pCCB->data_buffer_iodmacommand = data_buffer_iodmacommand;
                    if (data_buffer_iodmacommand->PrepareForDMA(kIODMACommandPrepareForDMANoOptions, data_buffer_memory_descriptor, 0 /*offset*/, parallelTask.fRequestedTransferCount /*length*/, &dmaFlags, &dmaSegmentCount, &data_buffer_physical_address_segment) == kIOReturnSuccess)
                    {
                        /* Overwrite the original fBufferIOVMAddr with the DMA-mapped physical address. */
                        parallelTask.fBufferIOVMAddr = (uint64_t)data_buffer_physical_address_segment.address;
                        /* Keep the virtual address so the driver can copy the INQUIRY data into the buffer. */
                        pCCB->OSDataBuffer = reinterpret_cast<uint8_t *>(data_buffer_virtual_address_segment.address);
                    }
                }
            }
        }
    }
}

response.version = kScsiUserParallelTaskResponseCurrentVersion1;
response.fTargetID = TARGETLUN2SCSITARGET(TargetID, 0);
response.fControllerTaskIdentifier = pCCB->fControllerTaskIdentifier;
response.fBytesTransferred = dataxferlen;
response.fCompletionStatus = taskStatus;
response.fServiceResponse = serviceResponse;
response.fSenseLength = senseLength; /* note: the original snippet assigned taskStatus here, which looks like a copy-paste mistake */
IOUserSCSIParallelInterfaceController::ParallelTaskCompletion(pCCB->completion, response);
pCCB->completion->release();
pCCB->completion = NULL;
pCCB->ccb_flags.start = 0; /* reset start/done for the outstanding CCB check */
if (pCCB->data_buffer_iodmacommand != NULL)
{
    pCCB->data_buffer_iodmacommand->CompleteDMA(kIODMACommandCompleteDMANoOptions);
    OSSafeReleaseNULL(pCCB->data_buffer_iodmacommand);
    pCCB->OSDataBuffer = NULL;
}
Related
I am trying to set up an HTCondor batch system, but when I run condor_status it only shows the master on both the master and worker nodes. They both show this:
Name OpSys Arch State Activity LoadAv Mem
[master ip] LINUX X86_64 Unclaimed Idle 0.000 973
Total Owner Claimed Unclaimed Matched Preempting Backfill Drain
X86_64/LINUX 1 0 0 1 0 0 0 0
Total 1 0 0 1 0 0 0 0
Condor_restart on the master node works fine, but on the worker nodes yields this error:
ERROR
SECMAN:2010:Received "DENIED" from server for user unauthenticated@unmapped using no authentication method, which may imply host-based security. Our address was '[ip address of master]', and server's address was '[ip address of worker]'. Check your ALLOW settings and IP protocols.
Here are the config files:
of the master node:
CONDOR_HOST = [private ip of master]
DAEMON_LIST = COLLECTOR, MASTER, NEGOTIATOR, SCHEDD, STARTD
# to avoid user authentication
HOSTALLOW_READ = *
HOSTALLOW_WRITE = *
HOSTALLOW_ADMINISTRATOR = *
of the worker node:
CONDOR_HOST = [private ip of master]
DAEMON_LIST = MASTER, STARTD
# to avoid user authentication
HOSTALLOW_READ = *
HOSTALLOW_WRITE = *
HOSTALLOW_ADMINISTRATOR = *
I am allowing on the same security group:
All TCP TCP 0 - 65535
All ICMP-IPv4 All
SSH on port 22
This is how it looks (security group ending in '6'):
Apparently the issue was running condor_reconfig -full. I just reinstalled it without doing that, used systemctl restart condor instead, and it worked. If someone wants to bring some insight on why that was, please do so :)
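A side note on the error message's hint to "Check your ALLOW settings": the HOSTALLOW_* knobs used above are legacy names, and current HTCondor documentation spells them ALLOW_* instead. A sketch of the equivalent settings (wide open, as in the question; tighten these for production):

# Modern spelling of the legacy HOSTALLOW_* options
ALLOW_READ = *
ALLOW_WRITE = *
ALLOW_ADMINISTRATOR = *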
I have a C++ program which does lots of stuff, but most importantly it is set up to use F-Stack, which is built on DPDK:
int main(int argc, char * argv[])
{
ff_init(argc, argv);
...
}
And I run the program like this:
sudo ./main --conf /etc/f-stack.conf --proc-type=primary
This is the error message I am receiving:
virtio_dev_configure(): Unsupported Rx multi queue mode 1
Port0 dev_configure = -22
EAL: Error - exiting with code: 1
Cause: init_port_start failed
I have not had this problem before when running this executable on a CentOS 8 AWS instance. I am now running it on a CentOS 8 Alibaba Cloud instance, so there's possibly some difference when running on Alibaba.
The only other thing I can think of is that there might be a configuration problem. However, I copied /etc/f-stack.conf from my AWS instance to Alibaba and updated some IP addresses, nothing else. So nothing significant has changed.
Any idea what's going on here and how to fix it?
Edit: here is my /etc/f-stack.conf file (without IP addresses included):
[dpdk]
# Hexadecimal bitmask of cores to run on.
lcore_mask=1
# Number of memory channels.
channel=4
# Specify base virtual address to map.
#base_virtaddr=0x7f0000000000
# Promiscuous mode of nic, default: enabled.
promiscuous=1
numa_on=1
# TX checksum offload skip, default: disabled.
# We need this switch enabled in the following cases:
# -> The application wants to enforce a wrong checksum for testing purposes
# -> Some cards advertise the offload capability but don't actually calculate the checksum.
tx_csum_offoad_skip=0
# TCP segment offload, default: disabled.
tso=0
# HW vlan strip, default: enabled.
vlan_strip=1
# sleep when no pkts incoming
# unit: microseconds
idle_sleep=0
# packet send delay time (0-100) when sending fewer than 32 pkts.
# default: 100 us.
# if set to 0, packets are sent immediately.
# if set >100, the delay is capped at 100 us.
# unit: microseconds
pkt_tx_delay=100
# use symmetric Receive-side Scaling(RSS) key, default: disabled.
symmetric_rss=0
# PCI device enable list.
# And driver options
#pci_whitelist=02:00.0
# enabled port list
#
# EBNF grammar:
#
# exp ::= num_list {"," num_list}
# num_list ::= <num> | <range>
# range ::= <num>"-"<num>
# num ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'
#
# examples
# 0-3 ports 0, 1,2,3 are enabled
# 1-3,4,7 ports 1,2,3,4,7 are enabled
#
# If using bonding, configure the bonding port id in port_list
# and do not configure the slave port ids in port_list.
# For example, if port 0 and port 1 are trunked into bonding port 2,
# set `port_list=2` and configure the `[port2]` section.
port_list=0
# Number of vdev.
nb_vdev=0
# Number of bond.
nb_bond=0
# Each core writes into its own pcap file, which is opened once and closed once it is full.
# Supports dumping the first snaplen bytes of each packet.
# If a pcap file grows larger than savelen bytes, it is closed and dumping continues into the next file.
[pcap]
enable=0
snaplen=96
savelen=16777216
savepath=.
# Port config section
# Correspond to dpdk.port_list's index: port0, port1...
[port0]
addr=<ADDR>
netmask=<NETMASK>
broadcast=<BROADCAST>
gateway=<GATEWAY>
# IPv6 net addr, Optional parameters.
#addr6=ff::02
#prefix_len=64
#gateway6=ff::01
# Multi virtual IPv4/IPv6 net addr, Optional parameters.
# `vip_ifname`: default `f-stack-x`
# `vip_addr`: Separated by semicolons, MAX number 64;
# Only supports netmask 255.255.255.255 and broadcast x.x.x.255 now; hard-coded in `ff_veth_setvaddr`.
# `vip_addr6`: Separated by semicolons, MAX number 64.
# `vip_prefix_len`: All addr6 use the same prefix now, default 64.
#vip_ifname=lo0
#vip_addr=192.168.1.3;192.168.1.4;192.168.1.5;192.168.1.6
#vip_addr6=ff::03;ff::04;ff::05;ff::06;ff::07
#vip_prefix_len=64
# lcore list used to handle this port
# the format is same as port_list
#lcore_list=0
# bonding slave port list used to handle this port
# need to config while this port is a bonding port
# the format is same as port_list
#slave_port_list=0,1
# Vdev config section
# Correspond to dpdk.nb_vdev's index: vdev0, vdev1...
# iface : Shouldn't always be set.
# path : The vuser device path in container. Required.
# queues : The max queues of vuser. Optional, default 1, greater or equal to the number of processes.
# queue_size : Queue size. Optional, default 256.
# mac : The mac address of vuser. Optional, default random, if vhost use phy NIC, it should be set to the phy NIC's mac.
# cq : Optional, if queues = 1, default 0; if queues > 1 default 1.
#[vdev0]
##iface=/usr/local/var/run/openvswitch/vhost-user0
#path=/var/run/openvswitch/vhost-user0
#queues=1
#queue_size=256
#mac=00:00:00:00:00:01
#cq=0
# bond config section
# See http://doc.dpdk.org/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
#[bond0]
#mode=4
#slave=0000:0a:00.0,slave=0000:0a:00.1
#primary=0000:0a:00.0
#mac=f0:98:38:xx:xx:xx
## opt argument
#socket_id=0
#xmit_policy=l23
#lsc_poll_period_ms=100
#up_delay=10
#down_delay=50
# KNI config: if enabled and method=reject,
# all packets that do not belong to the following tcp_port and udp_port
# will be transmitted to the kernel; if method=accept, all packets that belong to
# the following tcp_port and udp_port will be transmitted to the kernel.
#[kni]
#enable=1
#method=reject
# The format is same as port_list
#tcp_port=80,443
#udp_port=53
# FreeBSD network performance tuning configurations.
# Most native FreeBSD configurations are supported.
[freebsd.boot]
hz=100
# Block out a range of descriptors to avoid overlap
# with the kernel's descriptor space.
# You can increase this value according to your app.
fd_reserve=1024
kern.ipc.maxsockets=262144
net.inet.tcp.syncache.hashsize=4096
net.inet.tcp.syncache.bucketlimit=100
net.inet.tcp.tcbhashsize=65536
kern.ncallout=262144
kern.features.inet6=1
net.inet6.ip6.auto_linklocal=1
net.inet6.ip6.accept_rtadv=2
net.inet6.icmp6.rediraccept=1
net.inet6.ip6.forwarding=0
[freebsd.sysctl]
kern.ipc.somaxconn=32768
kern.ipc.maxsockbuf=16777216
net.link.ether.inet.maxhold=5
net.inet.tcp.fast_finwait2_recycle=1
net.inet.tcp.sendspace=16384
net.inet.tcp.recvspace=8192
#net.inet.tcp.nolocaltimewait=1
net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.sack.enable=1
net.inet.tcp.blackhole=1
net.inet.tcp.msl=2000
net.inet.tcp.delayed_ack=0
net.inet.udp.blackhole=1
net.inet.ip.redirect=0
net.inet.ip.forwarding=0
Edit 2: I added pci_whitelist=[PCIe BDF of NIC] to config and ran the following command:
The reason for the error is the check for the virtio PMD in the function virtio_dev_configure, in [dpdk root folder]/drivers/net/virtio/virtio_ethdev.c. It triggers because current F-Stack enables RSS for better flow distribution over its port-queue pairs.
There are two possible ways to fix the problem:
find a configuration parameter in f-stack.conf to disable RSS, or
change the F-Stack port configuration logic not to use RSS (by editing the code).
File: lib/ff_dpdk_if.c
Edit line 627 from port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS; to port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE; and rebuild F-Stack.
Note: if you use a physical NIC, RSS is supported in most cases, hence there will be no error there.
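The edit described above, expressed as a diff against lib/ff_dpdk_if.c (the exact line position may vary between F-Stack versions):

--- a/lib/ff_dpdk_if.c
+++ b/lib/ff_dpdk_if.c
@@ init_port_start @@
-    port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+    port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;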
I'm trying to run an Akka stream application, but I get an exception when running it on Linux.
When I run it with the Windows debugger, it works.
I tried both these commands:
java -jar ./myService.jar -Dconfig.resource=/opt/myservice/conf/application.conf
java -jar ./myService.jar -Dconfig.file=/opt/myService/conf/application.conf
But I get the following exception:
No configuration setting found for key 'akka.stream'
My application.conf file:
akka {
event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
loglevel = "DEBUG"
actor {
debug {
# enable function of LoggingReceive, which is to log any received message
# at DEBUG level
receive = on
}
}
stream {
# Default materializer settings
materializer {
max-input-buffer-size = 16
dispatcher = ""
subscription-timeout {
mode = cancel
timeout = 5s
}
# Enable additional troubleshooting logging at DEBUG log level
debug-logging = off
# Maximum number of elements emitted in batch if downstream signals large demand
output-burst-limit = 1000
auto-fusing = on
# Those stream elements which have explicit buffers (like mapAsync, mapAsyncUnordered,
# buffer, flatMapMerge, Source.actorRef, Source.queue, etc.) will preallocate a fixed
# buffer upon stream materialization if the requested buffer size is less than this
max-fixed-buffer-size = 1000000000
sync-processing-limit = 1000
debug {
fuzzing-mode = off
}
}
blocking-io-dispatcher = "akka.stream.default-blocking-io-dispatcher"
default-blocking-io-dispatcher {
type = "Dispatcher"
executor = "thread-pool-executor"
throughput = 1
thread-pool-executor {
fixed-pool-size = 16
}
}
}
# configure overrides to ssl-configuration here (to be used by akka-streams,
# and akka-http – i.e. when serving https connections)
ssl-config {
protocol = "TLSv1.2"
}
}
ssl-config {
logger = "com.typesafe.sslconfig.akka.util.AkkaLoggerBridge"
}
I've added:
println(system.settings.config)
but I get a result without the stream section.
Can you assist?
The syntax for the java command line is:
java [options] -jar filename [args]
This ordering matters: you must set any options before the -jar option.
So in your case:
java -Dconfig.file=/opt/myService/conf/application.conf -jar ./myService.jar
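Alternatively, you can load the file explicitly in code instead of relying on JVM options. A minimal sketch, assuming a Scala application and the Typesafe Config library that ships with Akka:

import java.io.File
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Parse the external file, then fall back to the defaults from reference.conf
// so that keys like akka.stream are still resolved.
val config = ConfigFactory
  .parseFile(new File("/opt/myService/conf/application.conf"))
  .withFallback(ConfigFactory.defaultReference())
  .resolve()

val system = ActorSystem("myService", config)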
I have a simulation where two UDPBasicApp modules (a client and a server) are connected via an Ethernet link. Instead, I want them to be connected through a wireless channel. The network is defined by the following NED file:
package udpbasic;
import inet.networklayer.autorouting.ipv4.IPv4NetworkConfigurator;
import inet.nodes.ethernet.Eth10M;
import inet.nodes.inet.StandardHost;
network ClientServer
{
    @display("bgb=380,247");
    submodules:
        client: StandardHost
        {
            @display("p=84,100");
        }
        server: StandardHost
        {
            @display("p=278,100");
        }
        configurator: IPv4NetworkConfigurator
        {
            @display("p=181,188");
        }
    connections:
        client.ethg++ <--> Eth10M <--> server.ethg++;
}
I know that I have to change the line
client.ethg++ <--> Eth10M <--> server.ethg++;
where the Ethernet link is defined. Can I connect the client and the server through
a wireless link? Obviously, I am looking for the most basic configuration.
I am new to OMNeT++ and INET; I have searched the INET API reference, and it doesn't
help much. I would appreciate any suggestion.
I recommend reading the wireless tutorial in INET 3.0.
https://github.com/inet-framework/inet/blob/master/tutorials/wireless/omnetpp.ini
Ini file:
[General]
# Some global configuration to make the model simpler
# At this point you should take a look at the NED files corresponding to this Ini file.
# They are rather simple. The only interesting thing is that they are using parametrized types
# (i.e. the "like" keyword), so we will be able to change the type of the different modules from the Ini file.
# This allows us to go through the tutorial only by changing parameter values in this file.
# Limit the simulation to 25s
sim-time-limit = 25s
# Let's configure ARP
# ARP in the real world is used to figure out the MAC address of a node from its IPv4 address.
# We do not want to use it in this wireless tutorial as it just adds some uninteresting
# message exchanges before the real communication between the nodes can start. We will use
# the GlobalARP module instead, which can automatically provide all the MAC-IP associations
# for the nodes out of band.
**.arpType = "GlobalARP"
# Now we are ready to jump into the tutorial
[Config Wireless01]
description = Two nodes communicating via UDP
network = WirelessA
# Configure an application for hostA that sends a constant
# UDP traffic around 800Kbps (+ protocol overhead)
*.hostA.numUdpApps = 1
*.hostA.udpApp[0].typename = "UDPBasicApp"
*.hostA.udpApp[0].destAddresses = "hostB"
*.hostA.udpApp[0].destPort = 5000
*.hostA.udpApp[0].messageLength = 1000B
*.hostA.udpApp[0].sendInterval = exponential(10ms)
# Configure an app that receives the UDP traffic (and simply drops the data)
*.hostB.numUdpApps = 1
*.hostB.udpApp[0].typename = "UDPSink"
*.hostB.udpApp[0].localPort = 5000
# Configure the hosts to have a single "ideal" wireless NIC. An IdealWirelessNic
# can be configured with a maximum communication range. All packets within range
# are always received successfully, while out-of-range messages are never received.
# This is useful if we are not interested in how the actual messages get to their
# destination; we just want to be sure that they get there once the nodes are in range.
*.host*.wlan[*].typename = "IdealWirelessNic"
# All radios and MACs should run on 1Mbps in our examples
**.bitrate = 1Mbps
# Mandatory physical layer parameters
*.host*.wlan[*].radio.transmitter.maxCommunicationRange = 500m
# Simplify the IdealWirelessNic even further. We do not care even if there are
# transmission collisions. Any number of nodes in range can transmit at the same time
# and the packets will be still successfully delivered.
*.host*.wlan[*].radio.receiver.ignoreInterference = true
# Result: HostA can send data to hostB using almost the whole 1Mbps bandwidth.
Corresponding NED file:
package inet.tutorials.wireless;
import inet.networklayer.configurator.ipv4.IPv4NetworkConfigurator;
import inet.node.inet.INetworkNode;
import inet.physicallayer.contract.packetlevel.IRadioMedium;
// - create a network and specify the size to 500x500
// - drop an IPv4NetworkConfigurator and rename it to "configurator"
// - drop an IdealRadioMedium module and rename to "radioMedium"
// - drop two standardhosts at the 100,100 and 400,400 position and
// rename them to hostA and hostB
network WirelessA
{
    @display("bgb=500,500");
    @figure[thruputInstrument](type=gauge; pos=370,90; size=120,120; maxValue=2500; tickSize=500; colorStrip=green 0.75 yellow 0.9 red; label=Number of packets received; moduleName=hostB.udpApp[0]; signalName=rcvdPk);
    string hostType = default("WirelessHost");
    string mediumType = default("IdealRadioMedium");
    submodules:
        configurator: IPv4NetworkConfigurator {
            @display("p=149,29");
        }
        radioMedium: <mediumType> like IRadioMedium {
            @display("p=309,24");
        }
        hostA: <hostType> like INetworkNode {
            @display("p=50,250");
        }
        hostB: <hostType> like INetworkNode {
            @display("p=450,250");
        }
}
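Applied to the ClientServer network from the question, a minimal wireless variant could look like the sketch below. It reuses the parametrized-type style and the imports of the tutorial NED above; INET 3.x module paths are assumed:

package udpbasic;

import inet.networklayer.configurator.ipv4.IPv4NetworkConfigurator;
import inet.node.inet.INetworkNode;
import inet.physicallayer.contract.packetlevel.IRadioMedium;

network ClientServerWireless
{
    @display("bgb=380,247");
    string hostType = default("WirelessHost");
    string mediumType = default("IdealRadioMedium");
    submodules:
        client: <hostType> like INetworkNode {
            @display("p=84,100");
        }
        server: <hostType> like INetworkNode {
            @display("p=278,100");
        }
        configurator: IPv4NetworkConfigurator {
            @display("p=181,188");
        }
        radioMedium: <mediumType> like IRadioMedium {
            @display("p=181,50");
        }
    // No connections section: traffic goes through radioMedium instead of a wired link.
}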
Edit: here is the full config file:
tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1
tier1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
tier1.sources.source1.command = /usr/bin/vmstat 1
tier1.sources.source1.channels = channel1
tier1.sources.source1.kafka.consumer.timeout.ms = 20000000
tier1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
tier1.channels.channel1.capacity = 10000
tier1.channels.channel1.transactionCapacity = 1000
tier1.channels.channel1.brokerList= ip.address:9092
tier1.channels.channel1.topic= test1
tier1.channels.channel1.zookeeperConnect=ip.address:2181
tier1.channels.channel1.parseAsFlumeEvent=false
tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.hdfs.path = /user/flume/
tier1.sinks.sink1.hdfs.rollInterval = 5000
tier1.sinks.sink1.hdfs.rollSize = 5000
tier1.sinks.sink1.hdfs.rollCount = 1000
tier1.sinks.sink1.hdfs.idleTimeout= 10
tier1.sinks.sink1.hdfs.maxOpenFiles=1
tier1.sinks.sink1.hdfs.fileType = DataStream
tier1.sinks.sink1.channel = channel1
I didn't have idleTimeout and maxOpenFiles until recently, so it wasn't working even with the default configurations for those two options.
Question on using Flume to aggregate Kafka data. Currently, Flume is creating a new file every second when reading in streaming data. These are my settings:
tier1.sinks.sink1.hdfs.rollInterval = 500 (should be 500 seconds)
tier1.sinks.sink1.hdfs.rollSize = 5000 (should be bytes)
tier1.sinks.sink1.hdfs.rollCount = 1000 (number of events)
The one setting I'm not completely sure about is rollCount, so some additional info:
I'm getting 80 bytes/second; some of my files are 80 bytes with 2 messages, and some are 160 bytes with 4 messages. So it's not rolling based on time or size, which points to count, but I don't see why such small messages would register as 1000 events.
Thank you for the help!
Could the rollInterval be milliseconds? I think I may have had this issue before.
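If the goal is to roll purely on the 500-second timer, note that per the Flume HDFS sink documentation a value of 0 disables a roll trigger. A sketch of the sink settings under that assumption:

# Roll only on the timer; 0 disables the size-, count-, and idle-based triggers.
tier1.sinks.sink1.hdfs.rollInterval = 500
tier1.sinks.sink1.hdfs.rollSize = 0
tier1.sinks.sink1.hdfs.rollCount = 0
tier1.sinks.sink1.hdfs.idleTimeout = 0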