Apache Kafka producer does not store data - amazon-web-services

I am trying to access Kafka deployed on an AWS server with a public IP. However, when I try to connect and send some data, I receive no response and the server connection is closed. Following is my producer code --
public SensorDevice() {
    Properties props = new Properties();
    props.put("metadata.broker.list", "myip-xyz:9092");
    props.put("bootstrap.servers", "myip-xyz:9092");
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
    // props.put("partitioner.class", "example.producer.SimplePartitioner");
    props.put("request.required.acks", "1");
    producer = new KafkaProducer<String, String>(props);
}
public void run() {
    Object objectData = new Object();
    ProducerRecord<String, String> data = new ProducerRecord<String, String>(
            topic, "mytopic", objectData.toString());
    System.out.println(data);
    Future<RecordMetadata> rs = producer.send(data,
            new org.apache.kafka.clients.producer.Callback() {
                @Override
                public void onCompletion(RecordMetadata recordMetadata,
                        Exception arg1) {
                    System.out.println("Received ack for partition="
                            + recordMetadata.partition() + " offset = "
                            + recordMetadata.offset());
                }
            });
    try {
        String msg = "";
        RecordMetadata rm = rs.get();
        msg = msg + " partition = " + rm.partition() + " offset ="
                + rm.offset();
        System.out.println(msg);
    } catch (Exception e) {
        System.out.println(e);
    }
    producer.close();
}
I have also tried adding advertised.host.name to the server.properties config file.
Kafka shows the following error --
> [2015-04-24 09:06:35,329] INFO Created log for partition [mytopic,0] in /tmp/kafka-logs with properties {segment.index.bytes ->
> 10485760, file.delete.delay.ms -> 60000, segment.bytes -> 1073741824,
> flush.ms -> 9223372036854775807, delete.retention.ms -> 86400000,
> index.interval.bytes -> 4096, retention.bytes -> -1,
> min.insync.replicas -> 1, cleanup.policy -> delete,
> unclean.leader.election.enable -> true, segment.ms -> 604800000,
> max.message.bytes -> 1000012, flush.messages -> 9223372036854775807,
> min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000,
> segment.jitter.ms -> 0}. (kafka.log.LogManager)
> [2015-04-24 09:06:35,330] WARN Partition [mytopic,0] on broker 0: No checkpointed highwatermark is found for partition [mytopic,0]
> (kafka.cluster.Partition)
> [2015-04-24 09:07:34,788] INFO Closing socket connection to /50.156.87.157. (kafka.network.Processor)
Please help me resolve this issue!

EC2 IP addresses are internal. You may face issues when dealing with an EC2 server running Kafka and ZooKeeper. Try setting the advertised.host.name and advertised.port variables in your server.properties file.
advertised.host.name should be the IP address of the EC2 server.
advertised.port should be the Kafka port. By default it is 9092.
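For example, using the same placeholder host as in the producer code above, the relevant lines in server.properties would look like this (substitute your EC2 server's actual address):
advertised.host.name=myip-xyz
advertised.port=9092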


How to run Cloud Tasks on localhost - Java 11 standard GCP

I have a Google App Engine project that I have finished upgrading from Java 8 to Java 11.
I also migrated from TaskQueue to Cloud Tasks and everything is working, but how can I run Cloud Tasks on localhost?
This is my old code:
Queue queue = QueueFactory.getDefaultQueue();
TaskOptions taskOptions = TaskOptions.Builder.withUrl("/" + backgroundTaskParams.getUrl());
taskOptions.retryOptions(RetryOptions.Builder.withTaskRetryLimit(backgroundTaskParams.getRetryLimit()));
taskOptions.countdownMillis(backgroundTaskParams.getStartDelay());
Map<String, String> params = backgroundTaskParams.getParams();
if (params != null) {
    for (String key : params.keySet()) {
        taskOptions.param(key, params.get(key));
    }
}
TaskHandle taskHandle = queue.add(taskOptions);
This does not work on Java 11 standard GCP, so I upgraded to this code:
try (CloudTasksClient client = CloudTasksClient.create()) {
    RetryConfig retyConfig = RetryConfig.builder()
            .setMaxRetries(backgroundTaskParams.getRetryLimit())
            .build();
    String queueName = QueueName.of(EMF.getProjectId(), "us-central1", "default").toString();
    String payload = Utilities.buildUrlParams(backgroundTaskParams.getParams());
    long startTime = (ApplicationServicesManager.getInstance().getTimeManager().getCurrentTime()
            + backgroundTaskParams.getStartDelay()) / 1000;
    Task.Builder taskBuilder = Task.newBuilder();
    taskBuilder.setAppEngineHttpRequest(
            AppEngineHttpRequest.newBuilder()
                    .setRelativeUri("/" + backgroundTaskParams.getUrl())
                    .setHttpMethod(HttpMethod.POST)
                    .setBody(ByteString.copyFrom(payload, Charset.defaultCharset()))
                    .build());
    taskBuilder.setScheduleTime(
            Timestamp.newBuilder()
                    .setSeconds(startTime));
    taskBuilder.setDispatchCount(1);
    Task task = taskBuilder.build();
    Task taskResponse = client.createTask(queueName, task);
} catch (Exception e) {
    NotificationsManager.sendLogException(e);
}
The new code works fine on the production server, but when I run it on localhost it still creates the task on the production server, so I cannot debug it, and that is not good :)
So how can I start the task on localhost?
Thank you
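One way to keep local runs off the production queue is to branch on the runtime environment and only call Cloud Tasks in production, posting straight to the local handler otherwise. A minimal sketch, assuming the documented GAE_ENV variable (set to "standard" when running on App Engine) and a local dev server on port 8080; the class and method names are hypothetical:
import com.google.cloud.tasks.v2.AppEngineHttpRequest;
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.Task;
import com.google.protobuf.ByteString;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.Charset;

public class TaskDispatcher {
    private static boolean runningOnAppEngine() {
        // GAE_ENV is only present on App Engine, so this is false on localhost.
        return "standard".equals(System.getenv("GAE_ENV"));
    }

    public static void dispatch(String relativeUri, String payload) throws Exception {
        if (runningOnAppEngine()) {
            // Production: enqueue the task, as in the code above.
            try (CloudTasksClient client = CloudTasksClient.create()) {
                String queueName = QueueName.of(
                        System.getenv("GOOGLE_CLOUD_PROJECT"), "us-central1", "default").toString();
                Task task = Task.newBuilder()
                        .setAppEngineHttpRequest(AppEngineHttpRequest.newBuilder()
                                .setRelativeUri(relativeUri)
                                .setHttpMethod(HttpMethod.POST)
                                .setBody(ByteString.copyFrom(payload, Charset.defaultCharset()))
                                .build())
                        .build();
                client.createTask(queueName, task);
            }
        } else {
            // Local debugging: POST directly to the local handler so the task
            // logic runs (and can be stepped through) in the local JVM.
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:8080" + relativeUri).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.getOutputStream().write(payload.getBytes(Charset.defaultCharset()));
            conn.getResponseCode(); // force the request to be sent
            conn.disconnect();
        }
    }
}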

Trying to create a producer for AWS MSK from a Spring Boot app; able to produce from the EC2 client (using kafka-console-producer.sh)

While producing a message to MSK (Kafka 2.1.0) I am getting
"Exception thrown when sending a message with key='null' and payload='Message->0' to topic AWSKafkaTopic"
I am trying to produce it from a Spring Boot app deployed on EC2 using Docker.
But the producer works fine when I produce the message from the same EC2 client using kafka-console-producer.sh:
bin/kafka-console-producer.sh --broker-list "XXBootstrapBrokerStringTlsXX" --producer.config client.properties --topic AWSKafkaTopic
I have tried the same program locally with Kafka 2.3.0 and ZooKeeper, and it works fine there (running the Spring Boot app in Docker).
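For reference, the client.properties passed to the console producer for a TLS listener typically just sets the security protocol and a truststore, along these lines (a sketch based on the standard MSK TLS setup; the truststore path is an assumption):
security.protocol=SSL
ssl.truststore.location=/tmp/kafka.client.truststore.jks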
Config->
@Value("${spring.kafka.producer.bootstrap-servers}")
private String bootstrapServers;

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return props;
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}

@Bean
public Sender sender() {
    return new Sender();
}
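The bootstrapServers field above is injected from the application properties, so the broker string for MSK ends up in configuration rather than code, e.g. (using the same placeholder as the console-producer command):
spring.kafka.producer.bootstrap-servers=XXBootstrapBrokerStringTlsXX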
Client->
@Autowired
private KafkaTemplate<String, String> kafkaTemplate;

public void sendMessage(String message) {
    this.kafkaTemplate.send("AWSKafkaTopic", message);
}
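To surface the underlying failure as soon as it happens (rather than only in the listener log), a callback can be attached to the future returned by send(). A minimal sketch, assuming the spring-kafka 2.x ListenableFuture API:
public void sendMessage(String message) {
    this.kafkaTemplate.send("AWSKafkaTopic", message).addCallback(
            result -> System.out.println("Sent to partition "
                    + result.getRecordMetadata().partition()
                    + " at offset " + result.getRecordMetadata().offset()),
            ex -> ex.printStackTrace());
}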
Actual result->
ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [XXBootstrapBrokerStringTlsXX]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
Log:
2019-07-24 07:40:43.305 INFO 1 --- [nio-9000-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 2.0.1
2019-07-24 07:40:43.305 INFO 1 --- [nio-9000-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : fa14705e51bd2ce5
2019-07-24 07:41:43.313 ERROR 1 --- [nio-9000-exec-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='Message->0' to topic AWSKafkaTopic:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
In my case I tried to produce the message to a new topic, but auto.create.topics.enable was false on the AWS broker, so it is better to either produce to an existing topic or set auto.create.topics.enable to true and try again.
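If you would rather create the topic up front than change the broker setting, it can be created from the EC2 client with the standard tooling, e.g. (a sketch; the ZooKeeper connection string placeholder follows the same convention as above, and the replication factor must not exceed the number of brokers in the cluster):
bin/kafka-topics.sh --create --zookeeper "XXZookeeperConnectStringXX" --replication-factor 3 --partitions 1 --topic AWSKafkaTopic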

akka stream custom graph stage

I have an akka stream from a web socket, like akka stream consume web socket, and would like to build a reusable graph stage (inlet: the stream; FlowShape: add an additional field to the JSON specifying the origin, i.e.
{
...,
"origin":"blockchain.info"
}
and an outlet to Kafka).
I face the following 3 problems:
unable to wrap my head around creating a custom Inlet from the web-socket flow
unable to integrate Kafka directly into the stream (see the code below)
not sure whether the transformer that adds the additional field would need to deserialize the JSON in order to add the origin
The sample project (flow only) looks like:
import system.dispatcher
implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()

val incoming: Sink[Message, Future[Done]] =
  Flow[Message].mapAsync(4) {
    case message: TextMessage.Strict =>
      println(message.text)
      Future.successful(Done)
    case message: TextMessage.Streamed =>
      message.textStream.runForeach(println)
    case message: BinaryMessage =>
      message.dataStream.runWith(Sink.ignore)
  }.toMat(Sink.last)(Keep.right)

val producerSettings = ProducerSettings(system, new ByteArraySerializer, new StringSerializer)
  .withBootstrapServers("localhost:9092")

val outgoing = Source.single(TextMessage("{\"op\":\"unconfirmed_sub\"}")).concatMat(Source.maybe)(Keep.right)

val webSocketFlow = Http().webSocketClientFlow(WebSocketRequest("wss://ws.blockchain.info/inv"))

val ((completionPromise, upgradeResponse), closed) =
  outgoing
    .viaMat(webSocketFlow)(Keep.both)
    .toMat(incoming)(Keep.both)
    // TODO not working integrating kafka here
    // .map(_.toString)
    // .map { elem =>
    //   println(s"PlainSinkProducer produce: ${elem}")
    //   new ProducerRecord[Array[Byte], String]("topic1", elem)
    // }
    // .runWith(Producer.plainSink(producerSettings))
    .run()

val connected = upgradeResponse.flatMap { upgrade =>
  if (upgrade.response.status == StatusCodes.SwitchingProtocols) {
    Future.successful(Done)
  } else {
    throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
    system.terminate
  }
}

// kafka that works / writes dummy data
val done1 = Source(1 to 100)
  .map(_.toString)
  .map { elem =>
    println(s"PlainSinkProducer produce: ${elem}")
    new ProducerRecord[Array[Byte], String]("topic1", elem)
  }
  .runWith(Producer.plainSink(producerSettings))
One issue is around the incoming stage, which is modelled as a Sink, where it should be modelled as a Flow in order to subsequently feed messages into Kafka.
Because incoming text messages can be Streamed, you also need to handle TextMessage.Streamed; one way is to fold the streamed frames back into a single String, as follows:
val incoming: Flow[Message, String, NotUsed] = Flow[Message].mapAsync(4) {
  case msg: BinaryMessage =>
    msg.dataStream.runWith(Sink.ignore)
    Future.successful(None)
  case TextMessage.Strict(text) =>
    // strict messages already carry the whole payload
    Future.successful(Some(text))
  case TextMessage.Streamed(src) =>
    src.runFold("")(_ + _).map { msg => Some(msg) }
}.collect {
  case Some(msg) => msg
}
At this point you have something that produces strings, which can be connected to Kafka:
val addOrigin: Flow[String, String, NotUsed] = ???

val ((completionPromise, upgradeResponse), closed) =
  outgoing
    .viaMat(webSocketFlow)(Keep.both)
    .via(incoming)
    .via(addOrigin)
    .map { elem =>
      println(s"PlainSinkProducer produce: ${elem}")
      new ProducerRecord[Array[Byte], String]("topic1", elem)
    }
    .toMat(Producer.plainSink(producerSettings))(Keep.both)
    .run()

Concurrency in Erlang

I'm new to Erlang and I'm trying to implement a register/login server. I have a function to register new users, and it works well:
reg(Sock) ->
    receive
        {tcp, _, Usr} ->
            io:format("User: ~p ~n", [Usr])
    end,
    gen_tcp:send(Sock, [Usr]),
    receive
        {tcp, _, Pass} ->
            io:format(" a pass ~p~n", [Pass])
    end,
    gen_tcp:send(Sock, [Pass]),
    receive
        {tcp, _, Msg} ->
            case Msg of
                <<"condutor\n">> ->
                    condutor ! {register, {Usr, Pass}};
                <<"passageiro\n">> ->
                    io:format("passageiro~n")
            end
    end.
But now I want another function that checks whether the user wants to log in or register and dispatches to the proper function. However, when I add this function it doesn't read the user's input:
gestor(Sock) ->
    receive
        {tcp, _, Msg} ->
            case Msg of
                <<"login\n">> ->
                    login(Sock);
                <<"registo\n">> ->
                    gen_tcp:send(Sock, "OK"),
                    reg(Sock)
            end
    end.
It receives the user's option and dispatches to the right function, but then it doesn't read anything. I can't understand this, because if I call reg directly it works fine, but if I call it from another function I can't read anything from the socket.
If anyone could help me I would appreciate it very much.
EDITED:
Thanks a lot for your replies. I'm trying to implement a Java client and an Erlang server communicating through a TCP socket. The user is expected to type "registo", then a username, a password, and "condutor" or "passageiro".
The socket works fine, because if I call reg(Sock) instead of gestor(Sock) everything works as expected. The problem is when I call reg(Sock) inside gestor(Sock): in this case I can't receive the user input in the reg function.
-module(server2).
-export([server/1]).

server(Port) ->
    {ok, LSock} = gen_tcp:listen(Port, [binary, {packet, line}, {reuseaddr, true}]),
    Condutor = spawn(fun() -> regUtente([]) end),
    register(condutor, Condutor),
    acceptor(LSock).

acceptor(LSock) ->
    {ok, Sock} = gen_tcp:accept(LSock),
    spawn(fun() -> acceptor(LSock) end),
    io:format("Ligação estabelecida~n"),
    % reg(Sock). --- Calling reg directly, without passing through gestor, WORKS FINE.
    gestor(Sock).
Java Client:
import java.io.*;
import java.net.*;

public class EchoClient {
    public static void main(String[] args) throws IOException {
        if (args.length != 2) {
            System.err.println(
                "Usage: java EchoClient <host name> <port number>");
            System.exit(1);
        }
        String hostName = args[0];
        int portNumber = Integer.parseInt(args[1]);
        try (
            Socket echoSocket = new Socket(hostName, portNumber);
            PrintWriter out =
                new PrintWriter(echoSocket.getOutputStream(), true);
            BufferedReader in =
                new BufferedReader(
                    new InputStreamReader(echoSocket.getInputStream()));
            BufferedReader stdIn =
                new BufferedReader(
                    new InputStreamReader(System.in))
        ) {
            String userInput;
            while ((userInput = stdIn.readLine()) != null) {
                out.println(userInput);
                System.out.println("echo: " + in.readLine());
            }
        } catch (UnknownHostException e) {
            System.err.println("Don't know about host " + hostName);
            System.exit(1);
        } catch (IOException e) {
            System.err.println("Couldn't get I/O for the connection to " +
                hostName);
            System.exit(1);
        }
    }
}
Send "OK\n" as a response to "registo." Your java code is using readline so you need a line terminator.

After changing a key's value from Machine2, not getting the changed value on Machine1

I built a sample application and ran it from 2 different machines, where both applications use the AppFabric cache. I set pollInterval="120" seconds in both applications' config files with the settings below:
<localCache isEnabled="true"
sync="NotificationBased"
ttlValue="300"
objectCount="10"/>
<!--(optional) specify cache notifications poll interval-->
<clientNotification pollInterval="120" />
I also enabled notifications on the cluster using PowerShell.
Now from Machine1 I read the key called key1, whose value is "Value1".
Then from Machine2 I changed the value of key1 to "Changed".
Then from Machine2 I read key1, whose value is now displayed as "Changed".
Then, after the poll interval of 2 minutes, I read key1 from Machine1, and its value is still displayed as "Value1". Why is it not displaying "Changed"?
Why is the change not detected by the application on Machine1? Why is the local cache invalidation not occurring?
At Ahmed Ilyas:
> show the code you are using to read and write to the cache. you also have not explained how you configured AppFabric and these machines. are they joined to the cluster?
I have read through the AFC read-through API, which is done in a separate project. Writing to the cache is done simply with the Put() method. As this is a sample project, I thought there was no need to update the database, only the cache cluster.
The above config settings apply to each application running on the 2 machines.
I have allowed access for these 2 machines by granting them access in the cache cluster. One machine is both the AFC server and a cache client (i.e. Machine1).
Hope this helps you to answer. Find the code below:
public class CacheUtil
{
    private static DataCacheFactory _factory = null;
    private static DataCache _cache = null;

    static CacheUtil()
    {
        if (_cache == null)
        {
            // Declare array for cache host(s).
            DataCacheServerEndpoint[] servers = new DataCacheServerEndpoint[1];
            servers[0] = new DataCacheServerEndpoint("H1011.hoboo.net", 22233);

            // Set the local cache properties. In this example, it
            // is timeout-based with a timeout of 300 seconds (5 minutes).
            DataCacheLocalCacheProperties localCacheConfig;
            TimeSpan localTimeout = new TimeSpan(0, 5, 0);
            localCacheConfig = new DataCacheLocalCacheProperties(60, localTimeout, DataCacheLocalCacheInvalidationPolicy.TimeoutBased);

            // Setup the DataCacheFactory configuration.
            DataCacheFactoryConfiguration factoryConfig = new DataCacheFactoryConfiguration();
            //factoryConfig.ChannelOpenTimeout = new TimeSpan(0, 0, 0);
            //factoryConfig.Servers = servers;
            //factoryConfig.LocalCacheProperties = localCacheConfig;
            _factory = new DataCacheFactory();
            //_factory = new DataCacheFactory(factoryConfig);
            _cache = _factory.GetCache("default");
        }
    }

    public static DataCache GetCache()
    {
        if (_cache != null) return _cache;
        try
        {
            RuntimeContext.WriteAppFabricErrorLog(new AppFabricLogger()
            {
                CacheKey = "Connected to AppFabric Cache Server.",
                CacheData = "Connected to AppFabric Cache Server.",
                ErrorString = "Connected to AppFabric Cache Server."
            });
        }
        catch (Exception ex)
        {
            // Suppress error.
        }
        return _cache;
    }
}
The other class, which has Get():
public static object Get(string pName)
{
    object cachedItem = null;
    try
    {
        // Check configuration settings for AppFabric.
        bool appFabricCache;
        bool.TryParse(System.Configuration.ConfigurationManager.AppSettings["AppFabricCache"], out appFabricCache);
        if (appFabricCache)
        {
            // Get data from the AppFabric cache server.
            cachedItem = CacheUtil.GetCache().Get(pName);
        }
        else
        {
            // Get data from the local cache server.
            cachedItem = RuntimeContextOlderVersion.Get(pName);
        }
    }
    catch (Exception Ex)
    {
        // If it fails, write the reason to the log file.
        WriteAppFabricErrorLog(new AppFabricLogger()
        {
            CacheKey = pName,
            CacheData = "Get Method",
            ErrorString = Ex.ToString()
        });
    }
    return cachedItem;
}
@stuartd: Yes, I have enabled notifications. You can see that in my app config.
For Stuart:
PS C:\Windows\system32> get-cacheconfig
cmdlet Get-CacheConfig at command pipeline position 1
Supply values for the following parameters:
CacheName: default
CacheName : default
TimeToLive : 10 mins
CacheType : Partitioned
Secondaries : 0
MinSecondaries : 0
IsExpirable : True
EvictionType : LRU
NotificationsEnabled : True
WriteBehindEnabled : False
WriteBehindInterval : 300
WriteBehindRetryInterval : 60
WriteBehindRetryCount : -1
ReadThroughEnabled : True
ProviderType : SampleProvider.Provider,SampleProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=cde85af3c5f6411e
ProviderSettings : {"DBConnection"="Database=Test123;Server=..**.**;uid=****;pwd=*****;connection timeout=5000"}