nslookup showing a lot of information

I am taking a CS course and we're looking into the nslookup command. When my instructor runs it, he gets only the non-authoritative results. When I run it, I get a ton of extra output, with the records I asked for via the -type= option hidden amongst it. Here's my output. Is this normal?
I ran nslookup -type=NS starwars.com
main parsing starwars.com
addlookup()
make_empty_lookup()
make_empty_lookup() = 0x7f9118d9e000->references = 1
looking up starwars.com
lock_lookup dighost.c:4184
success
start_lookup()
setup_lookup(0x7f9118d9e000)
resetting lookup counter.
cloning server list
clone_server_list()
make_server(75.75.75.75)
make_server(75.75.76.76)
idn_textname: starwars.com
using root origin
recursive query
add_question()
starting to render the message
done rendering
create query 0x7f9117a2d000 linked to lookup 0x7f9118d9e000
dighost.c:2083:lookup_attach(0x7f9118d9e000) = 2
dighost.c:2587:new_query(0x7f9117a2d000) = 1
create query 0x7f9117a2d1c0 linked to lookup 0x7f9118d9e000
dighost.c:2083:lookup_attach(0x7f9118d9e000) = 3
dighost.c:2587:new_query(0x7f9117a2d1c0) = 1
do_lookup()
start_udp(0x7f9117a2d000)
dighost.c:2936:query_attach(0x7f9117a2d000) = 2
working on lookup 0x7f9118d9e000, query 0x7f9117a2d000
dighost.c:2981:query_attach(0x7f9117a2d000) = 3
unlock_lookup dighost.c:4186
dighost.c:2898:query_attach(0x7f9117a2d000) = 4
recving with lookup=0x7f9118d9e000, query=0x7f9117a2d000, handle=(nil)
recvcount=1
have local timeout of 5000
dighost.c:2847:query_attach(0x7f9117a2d000) = 5
sending a request
sendcount=1
dighost.c:1676:query_detach(0x7f9117a2d000) = 4
dighost.c:2918:query_detach(0x7f9117a2d000) = 3
send_done(0x7f9117a8d000, success, 0x7f9117a2d000)
sendcount=0
lock_lookup dighost.c:2615
success
dighost.c:2629:lookup_attach(0x7f9118d9e000) = 4
dighost.c:2648:query_detach(0x7f9117a2d000) = 2
dighost.c:2649:lookup_detach(0x7f9118d9e000) = 3
check_if_done()
list empty
unlock_lookup dighost.c:2652
recv_done(0x7f9117a8d000, success, 0x7f91187fa010, 0x7f9117a2d000)
lock_lookup dighost.c:3577
success
recvcount=0
dighost.c:3589:lookup_attach(0x7f9118d9e000) = 4
before parse starts
after parse
printmessage()
Server: 75.75.75.75
Address: 75.75.75.75#53
Non-authoritative answer:
printsection()
starwars.com nameserver = a28-65.akam.net.
starwars.com nameserver = a9-66.akam.net.
starwars.com nameserver = a13-67.akam.net.
starwars.com nameserver = a12-66.akam.net.
starwars.com nameserver = a18-64.akam.net.
starwars.com nameserver = a1-127.akam.net.
Authoritative answers can be found from:
printsection()
printsection()
a9-66.akam.net internet address = 184.85.248.66
a9-66.akam.net has AAAA address 2a02:26f0:117::42
a13-67.akam.net internet address = 2.22.230.67
a13-67.akam.net has AAAA address 2600:1480:800::43
a12-66.akam.net internet address = 184.26.160.66
a18-64.akam.net internet address = 95.101.36.64
a1-127.akam.net internet address = 193.108.91.127
a1-127.akam.net has AAAA address 2600:1401:2::7f
a28-65.akam.net internet address = 95.100.173.65
still pending.
dighost.c:4079:query_detach(0x7f9117a2d000) = 1
dighost.c:4081:_cancel_lookup()
dighost.c:2669:query_detach(0x7f9117a2d000) = 0
dighost.c:2669:destroy_query(0x7f9117a2d000) = 0
dighost.c:1634:lookup_detach(0x7f9118d9e000) = 3
dighost.c:2669:query_detach(0x7f9117a2d1c0) = 0
dighost.c:2669:destroy_query(0x7f9117a2d1c0) = 0
dighost.c:1634:lookup_detach(0x7f9118d9e000) = 2
check_if_done()
list empty
dighost.c:4087:lookup_detach(0x7f9118d9e000) = 1
clear_current_lookup()
dighost.c:1759:lookup_detach(0x7f9118d9e000) = 0
destroy_lookup
freeing server 0x7f9117a12000 belonging to 0x7f9118d9e000
freeing server 0x7f9117a12a00 belonging to 0x7f9118d9e000
start_lookup()
check_if_done()
list empty
shutting down
dighost_shutdown()
unlock_lookup dighost.c:4091
done, and starting to shut down
cancel_all()
lock_lookup dighost.c:4200
success
unlock_lookup dighost.c:4231
destroy_libs()
freeing task
lock_lookup dighost.c:4251
success
flush_server_list()
destroy DST lib
unlock_lookup dighost.c:4279
Removing log context
Destroy memory
Just seeing if this is the normal output, because on my instructor's screen he only gets the authoritative and non-authoritative sections.

Looks like it relates to this: https://bugs.kali.org/view.php?id=7522
Try adding -nod2 when you run the command.
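For example, keeping the same query:
nslookup -nod2 -type=NS starwars.com
The -nod2 flag only turns the extra debugging output off; the lookup itself is unchanged.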

Related

SCSIControllerDriverKit: Process gets stuck on UserCreateTargetForID

Context:
We are working on migrating our driver, which currently ships as a kernel extension, to the DriverKit framework.
The driver works with Thunderbolt RAID storage devices.
When connected to the host through the Thunderbolt interface, the device presents itself to the OS as a PCI device. The main function of our driver (.kext) is to create a "virtual" SCSI device in the OS for each virtual RAID array, so that the OS can work with these SCSI drives as ordinary disk storage.
We use https://developer.apple.com/documentation/scsicontrollerdriverkit to migrate this functionality in the dext version of the driver.
Current issue:
When a device is connected, the dext driver cannot create a SCSI drive in the OS.
Technically, our dext tries to create a SCSI drive using the UserCreateTargetForID() method.
At this step the OS sends the first SCSI command, "Test Unit Ready", to the device to check whether it is a SCSI device.
We process this command in an additional thread, separate from the main process of the dext (as recommended in the DriverKit documentation).
We can see in the logs that the device receives this command and responds, but when our dext sends the response back to the OS, the process gets stuck waiting. How can we understand why this happens and fix it?
More details:
We are migrating the functionality of an already existing ".kext" driver. We checked the logs of the kext driver at this step:
15:06:17.902539+0700 Target device try to create for idx:0
15:06:17.902704+0700 Send command 0 for target 0 len 0
15:06:18.161777+0700 Complete command: 0 for target: 0 Len: 0 status: 0 flags: 0
15:06:18.161884+0700 Send command 18 for target 0 len 6
15:06:18.161956+0700 Complete command: 18 for target: 0 Len: 6 status: 0 flags: 0
15:06:18.162010+0700 Send command 18 for target 0 len 44
15:06:18.172972+0700 Complete command: 18 for target: 0 Len: 44 status: 0 flags: 0
15:06:18.275501+0700 Send command 18 for target 0 len 36
15:06:18.275584+0700 Complete command: 18 for target: 0 Len: 36 status: 0 flags: 0
15:06:18.276257+0700 Target device created for idx:0
We can see the successful message "Target device created for idx:0".
In the dext logs of the same step:
we do not see the "Send command 18 for target 0 len 6" line that appears in the kext logs;
there is no log of the successful result "Target device created for idx:0".
I'll add a thread name to each line of the dext log (CustomThread,DefaultQueue,SendCommandCustomThread,InterruptQueue):
15:54:10.903466+0700 Try to create target for 0 UUID 432421434863538456 - CustomThread
15:54:10.903633+0700 UserDoesHBAPerformAutoSense - DefaultQueue
15:54:10.903763+0700 UserInitializeTargetForID - DefaultQueue
15:54:10.903876+0700 UserDoesHBASupportMultiPathing DefaultQueue
15:54:10.904200+0700 UserProcessParallelTask start - DefaultQueue
15:54:10.904298+0700 Sent command : 0 len 0 for target 0 - SendCommandCustomThread
15:54:11.163003+0700 Disable interrupts - InterruptQueue
15:54:11.163077+0700 Complete cmd : 0 for target: 0 len: 0 status: 0 flags: 0 - InterruptQueue
15:54:11.163085+0700 Enable interrupts - InterruptQueue
Code for completing the task:
SCSIUserParallelResponse osRsp = {0};
osRsp.fControllerTaskIdentifier = osTask->taskId;
osRsp.fTargetID = osTask->targetId;
osRsp.fServiceResponse = kSCSIServiceResponse_TASK_COMPLETE;
osRsp.fCompletionStatus = (SCSITaskStatus) response->status;
// Transfer length computation.
osRsp.fBytesTransferred = transferLength; // === 0 for this case.
ParallelTaskCompletion(osTask->action, osRsp);
osTask->action->release();
I'd appreciate any help.
This is effectively a deadlock, which you seem to have already worked out. It's not 100% clear from your question, but as I initially had the same problem, I assume you're calling UserCreateTargetForID from the driver's default queue. This won't work; you must call it from a non-default queue, because SCSIControllerDriverKit assumes that your default queue is idle and ready to handle requests from the kernel while you are calling this function. The header docs are very ambiguous on this, though they do mention it:
The dext class should call this method to create a new target for the
targetID. The framework ensures that the new target is created before the call returns.
Note that this call to the framework runs on the Auxiliary queue.
SCSIControllerDriverKit expects your driver to use 3 different dispatch queues (default, auxiliary, and interrupt), although I think it can be done with 2 as well. I recommend you (re-)watch the relevant part of the WWDC2020 session video about how Apple wants you to use the 3 dispatch queues, exactly. The framework does not seem to be very flexible on this point.
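For illustration only, a minimal sketch of kicking the call onto a separate queue. Assumptions (none of this is taken from the poster's driver): ivars->auxQueue is an IODispatchQueue created earlier, e.g. in Start(); targetID is the target to create; and nullptr stands in for the optional properties dictionary.
ivars->auxQueue->DispatchAsync(^{
    // Runs off the default queue, so the default queue stays free to service
    // the framework requests triggered while the target is being created.
    kern_return_t ret = UserCreateTargetForID(targetID, nullptr); // nullptr: assumed-optional properties dict
    (void)ret; // real code would check and report the result
});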
Good luck with the rest of the driver port, I found this DriverKit framework even more fussy than the other ones.
Thanks to pmdj for pointing me in the right direction. In my case, the answer was simply to add initialization of the version field of the response.
osRsp.version = kScsiUserParallelTaskResponseCurrentVersion1;
It looks obvious, but there is no information in the docs or the WWDC2020 video about initializing the version field.
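In context, the completion code from the question only needs that one extra line (sketch, reusing the snippet posted above):
SCSIUserParallelResponse osRsp = {0};
osRsp.version = kScsiUserParallelTaskResponseCurrentVersion1; // the missing initialization
osRsp.fControllerTaskIdentifier = osTask->taskId;
osRsp.fTargetID = osTask->targetId;
osRsp.fServiceResponse = kSCSIServiceResponse_TASK_COMPLETE;
osRsp.fCompletionStatus = (SCSITaskStatus) response->status;
osRsp.fBytesTransferred = transferLength; // still 0 for this case
ParallelTaskCompletion(osTask->action, osRsp);
osTask->action->release();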
My project is a hardware RAID user-space driver, and it has now passed the I/O stress test. Your problem is probably in the SCSI commands that transfer data: your driver has to hand data back to the system to complete the SCSI INQUIRY command. I think you also used UserGetDataBuffer; it is some distance from IOKit's equivalent function.
kern_return_t IMPL ( XXXXUserSpaceDriver, UserProcessParallelTask )
{
/*
**********************************************************************
** UserGetDataBuffer
**********************************************************************
*/
if(parallelTask.fCommandDescriptorBlock[0] == SCSI_CMD_INQUIRY)
{
IOBufferMemoryDescriptor *data_buffer_memory_descriptor = nullptr;
/*
******************************************************************************************************************************************
** virtual kern_return_t UserGetDataBuffer(SCSIDeviceIdentifier fTargetID, uint64_t fControllerTaskIdentifier, IOBufferMemoryDescriptor **buffer);
******************************************************************************************************************************************
*/
if((UserGetDataBuffer(parallelTask.fTargetID, parallelTask.fControllerTaskIdentifier, &data_buffer_memory_descriptor) == kIOReturnSuccess) && (data_buffer_memory_descriptor != NULL))
{
IOAddressSegment data_buffer_virtual_address_segment = {0};
if(data_buffer_memory_descriptor->GetAddressRange(&data_buffer_virtual_address_segment) == kIOReturnSuccess)
{
IOAddressSegment data_buffer_physical_address_segment = {0};
IODMACommandSpecification dmaSpecification;
IODMACommand *data_buffer_iodmacommand = {0};
bzero(&dmaSpecification, sizeof(dmaSpecification));
dmaSpecification.options = kIODMACommandSpecificationNoOptions;
dmaSpecification.maxAddressBits = 64;
if(IODMACommand::Create(ivars->pciDevice, kIODMACommandCreateNoOptions, &dmaSpecification, &data_buffer_iodmacommand) == kIOReturnSuccess)
{
uint64_t dmaFlags = kIOMemoryDirectionInOut;
uint32_t dmaSegmentCount = 1;
pCCB->data_buffer_iodmacommand = data_buffer_iodmacommand;
if(data_buffer_iodmacommand->PrepareForDMA(kIODMACommandPrepareForDMANoOptions, data_buffer_memory_descriptor, 0/*offset*/, parallelTask.fRequestedTransferCount/*length*/, &dmaFlags, &dmaSegmentCount, &data_buffer_physical_address_segment) == kIOReturnSuccess)
{
parallelTask.fBufferIOVMAddr = (uint64_t)data_buffer_physical_address_segment.address; /* data_buffer_physical_address: overwrite original fBufferIOVMAddr */
pCCB->OSDataBuffer = reinterpret_cast <uint8_t *> (data_buffer_virtual_address_segment.address);/* data_buffer_virtual_address */
}
}
}
}
}
}
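/* Note: UserProcessParallelTask() ends above; the statements below appear to be a
   separate fragment from the completion path, run once the controller reports the
   command as done. */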
response.fBytesTransferred = dataxferlen;
response.version = kScsiUserParallelTaskResponseCurrentVersion1;
response.fTargetID = TARGETLUN2SCSITARGET(TargetID, 0);
response.fControllerTaskIdentifier = pCCB->fControllerTaskIdentifier;
response.fCompletionStatus = taskStatus;
response.fServiceResponse = serviceResponse;
response.fSenseLength = taskStatus;
IOUserSCSIParallelInterfaceController::ParallelTaskCompletion(pCCB->completion, response);
pCCB->completion->release();
pCCB->completion = NULL;
pCCB->ccb_flags.start = 0;/*reset startdone for outstanding ccb check*/
if(pCCB->data_buffer_iodmacommand != NULL)
{
pCCB->data_buffer_iodmacommand->CompleteDMA(kIODMACommandCompleteDMANoOptions);
OSSafeReleaseNULL(pCCB->data_buffer_iodmacommand); // pCCB->data_buffer_iodmacommand->free(); pCCB->data_buffer_iodmacommand = NULL;
pCCB->OSDataBuffer = NULL;
}

Condor master node and workers only see the master node

I am trying to set up an HTCondor batch system, but when I run condor_status it only shows the master, on both the master and the worker nodes. They both show this:
Name OpSys Arch State Activity LoadAv Mem
[master ip] LINUX X86_64 Unclaimed Idle 0.000 973
Total Owner Claimed Unclaimed Matched Preempting Backfill Drain
X86_64/LINUX 1 0 0 1 0 0 0 0
Total 1 0 0 1 0 0 0 0
Condor_restart on the master node works fine, but on the worker nodes yields this error:
ERROR
SECMAN:2010:Received "DENIED" from server for user unauthenticated@unmapped using no authentication method, which may imply host-based security. Our address was '[ip address of master]', and server's address was '[ip address of worker]'. Check your ALLOW settings and IP protocols.
Here are the config files:
of the master node:
CONDOR_HOST = [private ip of master]
DAEMON_LIST = COLLECTOR, MASTER, NEGOTIATOR, SCHEDD, STARTD
# to avoid user authentication
HOSTALLOW_READ = *
HOSTALLOW_WRITE = *
HOSTALLOW_ADMINISTRATOR = *
of the worker node:
CONDOR_HOST = [private ip of master]
DAEMON_LIST = MASTER, STARTD
# to avoid user authentication
HOSTALLOW_READ = *
HOSTALLOW_WRITE = *
HOSTALLOW_ADMINISTRATOR = *
I am allowing the following in the same security group:
All TCP TCP 0 - 65535
All ICMP-IPv4 All
SSH on port 22
This is how it looks (security group ending in '6'):
Apparently the issue was running condor_reconfig -full. I reinstalled without doing that and used systemctl restart condor instead, and it worked. If someone wants to offer some insight into why that was, please do :)
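Concretely, that meant editing the config, then on each node running
sudo systemctl restart condor
and afterwards condor_status from either node should list both machines. (The sudo prefix is an assumption about the setup; the answer above only names systemctl restart condor.)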

Regex Fail2ban Rule

I wanted to test fail2ban but I'm hitting a problem: my attempts at the regex don't really work.
The username, e.g. userxy, occurs in the first line of the log; the IP address in the second line is what must be blocked by fail2ban.
/etc/fail2ban/jail.conf
[Filter1]
enabled = true
filter = test
port = 12345
protocol = tcp
logpath = /var/etc/logs/auth.log
banaction = iptables-multiport
findtime = 1800
bantime = 3600
/var/etc/logs/auth.log
2018/09/21 20:45:13 ASDFDGFS c (trail) password for 'userxy' invalid!
2018/09/21 20:45:13 ASDFDGFS c (client) anonymous disconnected from 192.168.252.37
/etc/fail2ban/filter.d/test.conf
[Init]
maxlines = 2
[Definition]
failregex = ^.*userxy*\n.*anonymous disconnected from <HOST>.*$
ignoreregex =
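For comparison, a hedged and untested sketch of how a two-line match is often written with fail2ban's <SKIPLINES> tag rather than a literal \n (note also that 'userxy*' in the original makes the * quantify the trailing y, which is probably unintended):
[Init]
maxlines = 2
[Definition]
failregex = ^.*password for 'userxy' invalid!$<SKIPLINES>^.*anonymous disconnected from <HOST>\s*$
ignoreregex =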

Kerberos kinit: Resource temporarily unavailable while getting initial credentials

I am in the process of setting up Kerberos on CentOS 7 (more specifically: the Hortonworks HDP 2.3 sandbox) running in a VirtualBox VM. My problem is that kinit seems to be unable to reach my KDC: the answer is "Resource temporarily unavailable while getting initial credentials" if I add an address to my /etc/hosts file, and if I leave that file as is I get the message "could not contact any host for realm mycompany while getting initial credentials".
The KDC is running (I can find it with ps, and the service starts with an "okay" message), and the same goes for kadmin.
As a guide for setting up kerberos I followed these two guides:
CentOS guide
Guide 2
My config files:
krb5.conf
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
[libdefaults]
default_realm = MYCOMPANY.COM
dns_lookup_realm = true
dns_lookup_kdc = true
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
MYCOMPANY.COM = {
kdc = kerberos.mycompany.com
admin_server = kerberos.mycompany.com
}
[domain_realm]
.mycompany.com = MYCOMPANY.COM
mycompany.com = MYCOMPANY.COM
kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88,750
[realms]
MYCOMPANY.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
kadm5.acl
*/admin@MYCOMPANY.COM *
/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.96.140 sandbox.hortonworks.com sandbox ambari.hortonworks.com
192.168.1.3 mycompany.com kerberos.mycompany.com
I get the "Resource..." error if I have any address in the third line of the hosts file, if that line is missing I get the "could not contact..." error.
I could trace the kinit command (with something along the lines of krb5_trace, though unfortunately I can't find the link I got it from any more, nor remember the exact command) to the address specified in the hosts file, so kinit seems to contact the right address; it's just that the KDC does not listen there.
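For reference, the tracing mechanism referred to above is most likely MIT Kerberos' KRB5_TRACE environment variable. Something along these lines prints each KDC address the library tries to contact (the principal name is only an example):
KRB5_TRACE=/dev/stdout kinit user/admin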
Netstat shows that the KDC is listening on the ports specified in the kdc.conf
Any help would be appreciated
Okay so it does work now. Things I did to fix it:
/etc/resolv.conf
mycompany.com 127.0.0.1
/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.96.140 sandbox.hortonworks.com sandbox ambari.hortonworks.com
127.0.0.1 mycompany.com kerberos.mycompany.com
And, most embarrassing: I used kinit mycompany/admin for the principal user/admin@mycompany.com, which is of course wrong.
The right call is of course kinit user/admin

Flume roll settings not working

Edit: Here is the full config file:
tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1
tier1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
tier1.sources.source1.command = /usr/bin/vmstat 1
tier1.sources.source1.channels = channel1
tier1.sources.source1.kafka.consumer.timeout.ms = 20000000
tier1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
tier1.channels.channel1.capacity = 10000
tier1.channels.channel1.transactionCapacity = 1000
tier1.channels.channel1.brokerList= ip.address:9092
tier1.channels.channel1.topic= test1
tier1.channels.channel1.zookeeperConnect=ip.address:2181
tier1.channels.channel1.parseAsFlumeEvent=false
tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.hdfs.path = /user/flume/
tier1.sinks.sink1.hdfs.rollInterval = 5000
tier1.sinks.sink1.hdfs.rollSize = 5000
tier1.sinks.sink1.hdfs.rollCount = 1000
tier1.sinks.sink1.hdfs.idleTimeout= 10
tier1.sinks.sink1.hdfs.maxOpenFiles=1
tier1.sinks.sink1.hdfs.fileType = DataStream
tier1.sinks.sink1.channel = channel1
I didn't have idleTimeout and maxOpenFiles until recently, so it wasn't working even with the default configuration for those two options.
This is a question about using Flume to aggregate Kafka data. Currently, Flume is creating a new file every second while reading in streaming data. These are my settings:
tier1.sinks.sink1.hdfs.rollInterval = 500 (should be 500 seconds)
tier1.sinks.sink1.hdfs.rollSize = 5000 (should be bytes)
tier1.sinks.sink1.hdfs.rollCount = 1000 (number of events)
The one setting I'm not completely sure about is rollCount, so here is some additional info:
I'm getting 80 bytes/second; some of my files are 80 bytes with 2 messages, some are 160 bytes with 4 messages. So it's not rolling based on time or size, which suggests it has to do with the count, but I don't see why such small messages would register as 1000 events.
Thank you for the help!
Could the rollInterval be milliseconds? I think I may have had this issue before.
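For reference, in the Flume HDFS sink a roll criterion set to 0 is disabled, so a sketch that rolls only on size (keeping the 5000-byte size from the question; the value is otherwise arbitrary) would be:
tier1.sinks.sink1.hdfs.rollInterval = 0
tier1.sinks.sink1.hdfs.rollCount = 0
tier1.sinks.sink1.hdfs.rollSize = 5000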