I am new to Kafka and have tried the methods I found online, like setting the property compression.type="snappy" (and also gzip and lz4) as well as compression.codec=snappy, but it's still not working.
I also tried setting linger.ms = 5000 together with the compression type.
Nothing of the above works; the message size is not getting compressed.
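For reference, here is a minimal sketch of setting these options programmatically with the Java producer client; the broker address and topic name are placeholders, and this assumes the org.apache.kafka.clients producer API rather than the old Scala producer:
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompressedProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // compression.type is the setting the Java producer honours;
        // compression.codec was the old Scala producer's name for it
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        // let records accumulate so each compressed batch contains more than one message
        props.put(ProducerConfig.LINGER_MS_CONFIG, "5000");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(64 * 1024));

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "some reasonably large payload")); // placeholder topic
        }
    }
}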
I have an issue trying to decompress an IMAP message compressed with the deflate method. What I've tried so far is isolating one direction of an IMAP conversation (using Wireshark's Follow TCP Stream function) and saving the message data in a raw format that I hope contains only the deflated message part. I then found some programs like tinf (1st and 3rd example) and miniz (tgunzip example) and tried to inflate that file back, but without success.
Am I missing something? Thank you in advance.
tinf - http://www.ibsensoftware.com/download.html
Miniz - https://code.google.com/archive/p/miniz/source/default/source
Try piping that raw data to:
perl -MCompress::Zlib -pe 'BEGIN{$i = inflateInit(-WindowBits => -15)}
$_=$i->inflate($_)'
The important part is the -WindowBits => -15, which changes the expected format into a raw one without an Adler-32 checksum.
(That's derived from the Dovecot source; it works for me on a Thunderbird-to-Gmail network capture.)
From RFC 4978, which specifies IMAP compression (emphasis mine):
When using the zlib library (see RFC1951), the functions
deflateInit2(), deflate(), inflateInit2(), and inflate() suffice to
implement this extension. The windowBits value must be in the range
-8 to -15, or else deflateInit2() uses the wrong format.
deflateParams() can be used to improve compression rate and resource
use. The Z_FULL_FLUSH argument to deflate() can be used to clear the
dictionary (the receiving peer does not need to do anything).
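For anyone who prefers Java over the Perl one-liner, here is a minimal sketch doing the same thing with java.util.zip.Inflater in raw mode (nowrap = true corresponds to the negative windowBits described above); the file names capture.bin and capture.out are placeholders:
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

public class RawInflate {
    public static void main(String[] args) throws IOException, DataFormatException {
        // placeholder: the raw deflate bytes saved from the Wireshark capture
        byte[] input = Files.readAllBytes(Paths.get("capture.bin"));

        // nowrap = true means a raw deflate stream: no zlib header and no
        // Adler-32 trailer, i.e. the equivalent of negative windowBits in zlib
        Inflater inflater = new Inflater(true);
        inflater.setInput(input);

        byte[] buf = new byte[8192];
        try (FileOutputStream out = new FileOutputStream("capture.out")) {
            while (!inflater.finished()) {
                int n = inflater.inflate(buf);
                if (n == 0 && inflater.needsInput()) {
                    break; // capture ended mid-stream; stop instead of spinning
                }
                out.write(buf, 0, n);
            }
        }
        inflater.end();
    }
}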
Here is my previous question: Send data from InputStream over Akka/Spring stream
I have managed to send a compressed and encrypted file over an Akka stream. Now I am looking for a way to transport metadata along with the data, mainly the filename and a hash (checksum).
My current idea is to use the Flow.prepend function and insert the metadata before the data this way:
the filename, which can vary in size but always ends with a null byte
a fixed-size hash (checksum)
the data
Then, on the receiving end, I would have to use Flow.takeWhile twice, once to read the filename and a second time to read the hash, and then just read the data. It doesn't really look like an elegant solution, and if in the future I want to add more metadata it will become even worse.
I have noticed the method Flow.named; however, the documentation says just:
Add a ``name`` attribute to this Flow.
and I do not know how to use it (or whether it is possible to transport the filename with it).
The question is: is there a better way to transport metadata along with data over an Akka stream than the above?
EDIT: Attaching my drawing of the idea.
I think prepending the metadata makes sense. A simple approach could be to prepend the metadata using the same framing you use to send the data.
The receiving end will need to know how many metadata blocks there are, and use this information to split them off. See the example below.
// client end
filenameSrc
  .concat(hashSrc)
  .concat(dataSrc)
  .via(Framing.delimiter(ByteString("\n"), Int.MaxValue, allowTruncation = true))
  .via(Tcp().outgoingConnection(???, ???))
  .runForeach{ ??? }

// server end
val printMetadata =
  Flow.fromGraph(GraphDSL.create() { implicit builder: GraphDSL.Builder[NotUsed] =>
    import GraphDSL.Implicits._
    val metadataSink = Sink.foreach(println)
    val bcast = builder.add(Broadcast[ByteString](2))
    bcast.out(0).take(2) ~> metadataSink
    FlowShape(bcast.in, bcast.out(1).drop(2).outlet)
  })

val handler =
  Framing.delimiter(ByteString("\n"), Int.MaxValue)
    .via(printMetadata)
    .via(???)
This is only one of many possible approaches to solve this. But whatever solution you choose, the receiver will need to know how to extract the metadata from the raw stream of bytes it reads over TCP.
I'm facing an issue: I receive messages from a Kafka source and wrote an interceptor to extract two fields (dataSoure and businessType) from the Kafka message (JSON format). I'm using gson.fromJson(), but I get the error below.
I want to know whether Flume truncates the Flume event when it exceeds some limit. If so, how do I configure a bigger value? My Kafka messages are always very long, about 60K bytes.
Looking forward to a reply. Thanks in advance!
2015-12-09 11:48:05,665 (PollableSourceRunner-KafkaSource-apply)
[ERROR -
org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:153)]
KafkaSource EXCEPTION, {} com.google.gson.JsonSyntaxException:
com.google.gson.stream.MalformedJsonException: Unterminated string at
line 1 column 4096
at com.google.gson.Gson.fromJson(Gson.java:809)
at com.google.gson.Gson.fromJson(Gson.java:761)
at com.google.gson.Gson.fromJson(Gson.java:710)
at com.xxx.flume.interceptor.JsonLogTypeInterceptor.intercept(JsonLogTypeInterceptor.java:43)
at com.xxx.flume.interceptor.JsonLogTypeInterceptor.intercept(JsonLogTypeInterceptor.java:61)
at org.apache.flume.interceptor.InterceptorChain.intercept(InterceptorChain.java:62)
at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:146)
at org.apache.flume.source.kafka.KafkaSource.process(KafkaSource.java:130)
Finally, I found the root cause by debugging the source code.
It is because I tried to convert event.getBody() to a map using Gson, which is incorrect: event.getBody() is a byte[], not a String, so it can't be converted directly. The correct code should be as below:
String body = new String(event.getBody(), "UTF-8");
Map<String, Object> map = gson.fromJson(body, new TypeToken<Map<String, Object>>() {}.getType());
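For completeness, here is a small self-contained sketch of that decode-then-parse step outside of Flume; the sample payload and field names are made up to match the question:
import java.nio.charset.StandardCharsets;
import java.util.Map;

import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;

public class BodyParseDemo {
    public static void main(String[] args) {
        // stand-in for event.getBody(): the JSON payload as raw bytes
        byte[] body = "{\"dataSoure\":\"app1\",\"businessType\":\"order\"}"
                .getBytes(StandardCharsets.UTF_8);

        // decode the byte[] to a String first, then hand the String to Gson
        String json = new String(body, StandardCharsets.UTF_8);
        Map<String, Object> map = new Gson()
                .fromJson(json, new TypeToken<Map<String, Object>>() {}.getType());

        System.out.println(map.get("dataSoure") + " / " + map.get("businessType"));
    }
}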
I have a toy setup sending log4j messages to HDFS using Flume. I'm not able to configure the HDFS sink to avoid many small files. I thought I could configure the HDFS sink to create a new file every time the file size reaches 10 MB, but it is still creating files of around 1.5 KB.
Here is my current flume config:
a1.sources=o1
a1.sinks=i1
a1.channels=c1
#source configuration
a1.sources.o1.type=avro
a1.sources.o1.bind=0.0.0.0
a1.sources.o1.port=41414
#sink config
a1.sinks.i1.type=hdfs
a1.sinks.i1.hdfs.path=hdfs://localhost:8020/user/myName/flume/events
#never roll based on time
a1.sinks.i1.hdfs.rollInterval=0
#10MB=10485760
a1.sinks.il.hdfs.rollSize=10485760
#never roll based on number of events
a1.sinks.il.hdfs.rollCount=0
#channel config
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
a1.sources.o1.channels=c1
a1.sinks.i1.channel=c1
It is a typo in your conf.
#sink config
a1.sinks.i1.type=hdfs
a1.sinks.i1.hdfs.path=hdfs://localhost:8020/user/myName/flume/events
#never roll based on time
a1.sinks.i1.hdfs.rollInterval=0
#10MB=10485760
a1.sinks.il.hdfs.rollSize=10485760
#never roll based on number of events
a1.sinks.il.hdfs.rollCount=0
In the 'rollSize' and 'rollCount' lines, you wrote il where it should be i1.
Try running with DEBUG logging; then you will see something like:
[SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.shouldRotate:465) - rolling: rollSize: 1024, bytes: 1024
Because of the il typo, the default rollSize of 1024 is being used.
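With the sink name corrected, those two lines should read:
a1.sinks.i1.hdfs.rollSize=10485760
a1.sinks.i1.hdfs.rollCount=0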
HDFS Sink has a property hdfs.batchSize (default 100) which describes "number of events written to file before it is flushed to HDFS". I think that's your problem here.
Consider also checking all the other HDFS Sink properties.
This can possibly happen because of the memory channel and its capacity. I guess it's dumping data to HDFS as soon as its capacity becomes full. Did you try using a file channel instead of the memory channel?
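If you want to try a file channel, a minimal configuration sketch could look like this (the checkpoint and data directories are placeholders):
a1.channels.c1.type=file
a1.channels.c1.checkpointDir=/var/flume/checkpoint
a1.channels.c1.dataDirs=/var/flume/data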
My service occasionally fetches some content (mostly image files) in batches (20 at a time), in a concurrent manner. Sometimes some of these image files end up corrupted (the browser doesn't render them); I'm not sure why, but it only happens when downloading in larger batches. How can I check programmatically whether a download was corrupted so I can restart it?
I use Play2 WS with Scala; Iteratees are not used.
stackoverflow.com/questions/304268/getting-a-files-md5-checksum-in-java
Try retrieving the MD5 checksum. Would that work for you?
Use DigestUtils from Apache Commons Codec library:
try (FileInputStream fis = new FileInputStream(new File("foo"))) {
    String md5 = org.apache.commons.codec.digest.DigestUtils.md5Hex(fis);
}
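To actually detect a corrupted download you need something to compare the digest against. Assuming the server (or a manifest) can supply the expected MD5, a small sketch of the check could look like this; isIntact and expectedMd5 are hypothetical names, and Commons Codec is assumed to be on the classpath:
import java.io.FileInputStream;
import java.io.IOException;

import org.apache.commons.codec.digest.DigestUtils;

public class DownloadCheck {
    // expectedMd5 is hypothetical: it has to come from the server or a manifest
    static boolean isIntact(String path, String expectedMd5) throws IOException {
        try (FileInputStream fis = new FileInputStream(path)) {
            return DigestUtils.md5Hex(fis).equalsIgnoreCase(expectedMd5);
        }
    }
}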