I have a test which constructs another event sourcing actor from inside a message handler, and this construction takes more than 3 seconds. Below is the current configuration; how can I increase the default timeout?
extends ScalaTestWithActorTestKit(
  ConfigFactory.parseString("""
    akka.persistence.testkit.events.serialize = off
    akka.actor.allow-java-serialization = on
    akka.test.single-expect-default = 999s
  """).withFallback(PersistenceTestKitPlugin.config).withFallback(ManualTime.config)
Here is the error message:
java.lang.AssertionError: Timeout (3 seconds) during receiveMessage while waiting for message.
Try the constructor: public ScalaTestWithActorTestKit(com.typesafe.config.Config config)
I use a similar constructor with the classic TestKit (it works for me):
extends TestKit(ActorSystem("AliveActorSpec", ConfigFactory.load(ConfigFactory.parseString("""
| akka {
| test {
| single-expect-default = 6s
| }
| }
""".stripMargin))))
Related
I get this error when I launch more than 4 processes at the same time, starting from zero:
{
  "insertId": "61a4a4920009771002b74809",
  "jsonPayload": {
    "asctime": "2021-11-29 09:59:46,620",
    "message": "Exception in callback <bound method ResumableBidiRpc._on_call_done of <google.api_core.bidi.ResumableBidiRpc object at 0x3eb1636b2cd0>>: ValueError('Cannot invoke RPC: Channel closed!')",
    "funcName": "handle_event",
    "lineno": 183,
    "filename": "_channel.py"
  }
}
This is the pub-sub schema:
[pub-sub schema diagram]
The error seems to happen at step 9 or 10.
The actual code is:
future = publisher.publish(
    topic_path,
    encoded_message,
    msg_attribute=message_key
)
future.add_done_callback(
    callback=lambda f:
        logging.info(...)
)
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(
    PROJECT_ID,
    "..."
)
streaming_pull_future = subscriber.subscribe(
    subscription_path,
    callback=aggregator_callback_handler.handle_message
)
aggregator_callback_handler.callback = streaming_pull_future
wait_result(
    timeout=300,
    pratica=...,
    f_check_res_condition=lambda: aggregator_callback_handler.response is not None
)
streaming_pull_future.cancel()
subscriber.close()
The module aggregator_callback_handler handles .nack and .ack.
The error occurs for some seconds, then the VMs on which the services are hosted scale up and the error stops. The same happens if, instead of launching the processes all together, I scale them manually, launching them one by one with some sleep in between.
I've already checked the timeouts and moved the subscriber outside of the context manager, but those solutions don't work.
Any idea on how to handle this?
I have the following code in my Lambda (Python and Boto3):
rds.restore_db_instance_from_db_snapshot(
    DBSnapshotIdentifier=snapshot_name,
    DBInstanceIdentifier=db_id,
    DBInstanceClass=rds_instance_class,
    VpcSecurityGroupIds=secgroup,
    DBSubnetGroupName=rds_subnet_groupname,
    MultiAZ=False,
    PubliclyAccessible=False,
    CopyTagsToSnapshot=True
)
waiter = rds.get_waiter('db_instance_available')
waiter.wait(DBInstanceIdentifier=db_id)
# some other operation that expects that DB is up and running.
The waiter was added as an attempt to properly wait for the DB. However, it looks like the waiter times out.
What would be the correct waiter to use in this case?
Try setting waiter.config.delay and/or waiter.config.max_attempts.
waiter = rds.get_waiter('db_instance_available')
waiter.config.delay = 123 # this is in seconds
waiter.config.max_attempts = 123
waiter.wait(DBInstanceIdentifier=db_id)
OR
waiter = rds.get_waiter('db_instance_available')
waiter.wait(
    DBInstanceIdentifier=db_id,
    WaiterConfig={
        'Delay': 123,
        'MaxAttempts': 123
    }
)
WaiterConfig (dict): A dictionary that provides parameters to control waiting behavior.
Delay (integer): The amount of time in seconds to wait between attempts. Default: 30
MaxAttempts (integer): The maximum number of attempts to be made. Default: 60
Could it be that your waiter is actually checking the existing DB and seeing that it's available before the status has had a chance to update after the previous command to restore the snapshot?
This Erlang-based web client issues N concurrent requests to a given URL; e.g., escript client.erl http://127.0.0.1:1234/random?num=1000 3200 issues 3200 concurrent requests. For completeness, a corresponding server can be found here.
The client uses spawn_link to spawn N processes that issue one request each. The success or failure of the httpc:request call is handled in the spawned process (within the dispatch_request function) and propagated to the parent process. The parent process handles both data messages from a spawned process and EXIT messages corresponding to the abnormal termination of a spawned process. So, the parent process waits to receive N messages from/about the spawned processes before it terminates normally.
Given the above context, the client hangs on some executions (e.g., when the server is terminated forcefully), as the parent process never receives N messages from the child processes. I am observing this behavior on a Raspberry Pi 3B running Raspbian 9.9 and esl-erlang 22.0-1.
The parent process does not seem to be handling all cases of termination of child processes. If so, what are these cases? If not, what might be the reason for receiving fewer than N messages?
Client code:
% escript client.erl http://127.0.0.1:1234/random?num=5 30
-module(client).
-export([main/1]).
-import(httpc, [request/1]).
-mode(compile).
dispatch_request(Url, Parent) ->
    Start = erlang:monotonic_time(microsecond),
    {Status, Value} = httpc:request(get, {Url, []}, [{timeout, 60000}], []),
    Elapsed_Time = (erlang:monotonic_time(microsecond) - Start) / 1000,
    Msg = case Status of
              ok ->
                  io_lib:format("~pms OK", [Elapsed_Time]);
              error ->
                  io_lib:format("~pms REQ_ERR ~p", [Elapsed_Time, element(1, Value)])
          end,
    Parent ! {Status, Msg}.

wait_on_children(0, NumOfSucc, NumOfFail) ->
    io:format("Success: ~p~n", [NumOfSucc]),
    io:format("Failure: ~p~n", [NumOfFail]);
wait_on_children(Num, NumOfSucc, NumOfFail) ->
    receive
        {'EXIT', ChildPid, {ErrorCode, _}} ->
            io:format("Child ~p crashed ~p~n", [ChildPid, ErrorCode]),
            wait_on_children(Num - 1, NumOfSucc, NumOfFail);
        {Verdict, Msg} ->
            io:format("~s~n", [Msg]),
            case Verdict of
                ok -> wait_on_children(Num - 1, NumOfSucc + 1, NumOfFail);
                error -> wait_on_children(Num - 1, NumOfSucc, NumOfFail + 1)
            end
    end.

main(Args) ->
    inets:start(),
    Url = lists:nth(1, Args),
    Num = list_to_integer(lists:nth(2, Args)),
    Parent = self(),
    process_flag(trap_exit, true),
    [spawn_link(fun() -> dispatch_request(Url, Parent) end) ||
        _ <- lists:seq(1, Num)],
    wait_on_children(Num, 0, 0).
I have a consumer using Akka Streams that consumes packets from Kafka. The number of threads is in the several thousands and always increasing.
I took a thread dump and found two kinds of threads with the maximum count (shown below, after the config).
The following is the config file:
akka.kafka.consumer {
# Tuning property of scheduled polls.
# Controls the interval from one scheduled poll to the next.
poll-interval = 250ms
# Tuning property of the `KafkaConsumer.poll` parameter.
# Note that non-zero value means that the thread that
# is executing the stage will be blocked. See also the `wakeup-timeout` setting below.
poll-timeout = 50ms
# The stage will delay stopping the internal actor to allow processing of
# messages already in the stream (required for successful committing).
# Prefer use of `DrainingControl` over a large stop-timeout.
stop-timeout = 30s
# Duration to wait for `KafkaConsumer.close` to finish.
close-timeout = 20s
# If offset commit requests are not completed within this timeout
# the returned Future is completed with a `CommitTimeoutException`.
# The `Transactional.source` waits this amount of time for the producer to mark messages as not
# being in flight anymore as well as waiting for messages to drain, when rebalance is triggered.
commit-timeout = 15s
# If commits take longer than this time a warning is logged
commit-time-warning = 1s
# Not used anymore (since 1.0-RC1)
# wakeup-timeout = 3s
# Not used anymore (since 1.0-RC1)
# max-wakeups = 10
# If set to a finite duration, the consumer will re-send the last committed offsets periodically
# for all assigned partitions. See https://issues.apache.org/jira/browse/KAFKA-4682.
commit-refresh-interval = infinite
# Not used anymore (since 1.0-RC1)
# wakeup-debug = true
# Fully qualified config path which holds the dispatcher configuration
# to be used by the KafkaConsumerActor. Some blocking may occur.
use-dispatcher = "akka.default-dispatcher"
# Properties defined by org.apache.kafka.clients.consumer.ConsumerConfig
# can be defined in this configuration section.
kafka-clients {
# Disable auto-commit by default
enable.auto.commit = false
}
# Time to wait for pending requests when a partition is closed
wait-close-partition = 500ms
# Limits the query to Kafka for a topic's position
position-timeout = 5s
# When using `AssignmentOffsetsForTimes` subscriptions: timeout for the
# call to Kafka's API
offset-for-times-timeout = 5s
# Timeout for akka.kafka.Metadata requests
# This value is used instead of Kafka's default from `default.api.timeout.ms`
# which is 1 minute.
metadata-request-timeout = 5s
# Interval for checking that transaction was completed before closing the consumer.
# Used in the transactional flow for exactly-once-semantics processing.
eos-draining-check-interval = 30ms
}
#####################################
# Akka Stream Reference Config File #
#####################################
akka {
stream {
# Default materializer settings
materializer {
# Initial size of buffers used in stream elements
initial-input-buffer-size = 4
# Maximum size of buffers used in stream elements
max-input-buffer-size = 8
# Fully qualified config path which holds the dispatcher configuration
# to be used by ActorMaterializer when creating Actors.
# When this value is left empty, the default-dispatcher will be used.
dispatcher = "akka.default-dispatcher"
# Cleanup leaked publishers and subscribers when they are not used within a given
# deadline
subscription-timeout {
# when the subscription timeout is reached one of the following strategies on
# the "stale" publisher:
# cancel - cancel it (via `onError` or subscribing to the publisher and
# `cancel()`ing the subscription right away
# warn - log a warning statement about the stale element (then drop the
# reference to it)
# noop - do nothing (not recommended)
mode = cancel
# time after which a subscriber / publisher is considered stale and eligible
# for cancelation (see `akka.stream.subscription-timeout.mode`)
timeout = 5s
}
# Enable additional troubleshooting logging at DEBUG log level
debug-logging = off
# Maximum number of elements emitted in batch if downstream signals large demand
output-burst-limit = 1000
# Enable automatic fusing of all graphs that are run. For short-lived streams
# this may cause an initial runtime overhead, but most of the time fusing is
# desirable since it reduces the number of Actors that are created.
# Deprecated, since Akka 2.5.0, setting does not have any effect.
auto-fusing = on
# Those stream elements which have explicit buffers (like mapAsync, mapAsyncUnordered,
# buffer, flatMapMerge, Source.actorRef, Source.queue, etc.) will preallocate a fixed
# buffer upon stream materialization if the requested buffer size is less than this
# configuration parameter. The default is very high because failing early is better
# than failing under load.
#
# Buffers sized larger than this will dynamically grow/shrink and consume more memory
# per element than the fixed size buffers.
max-fixed-buffer-size = 1000000000
# Maximum number of sync messages that an actor can process for stream-to-substream communication.
# The parameter allows interrupting synchronous processing to get upstream/downstream messages.
# This accelerates message processing that happens within the same actor while keeping the system responsive.
sync-processing-limit = 1000
debug {
# Enables the fuzzing mode which increases the chance of race conditions
# by aggressively reordering events and making certain operations more
# concurrent than usual.
# This setting is for testing purposes, NEVER enable this in a production
# environment!
# To get the best results, try combining this setting with a throughput
# of 1 on the corresponding dispatchers.
fuzzing-mode = off
}
}
# Fully qualified config path which holds the dispatcher configuration
# to be used by ActorMaterializer when creating Actors for IO operations,
# such as FileSource, FileSink and others.
blocking-io-dispatcher = "akka.stream.default-blocking-io-dispatcher"
default-blocking-io-dispatcher {
type = "Dispatcher"
executor = "thread-pool-executor"
throughput = 1
thread-pool-executor {
fixed-pool-size = 16
}
}
}
# configure overrides to ssl-configuration here (to be used by akka-streams, and akka-http – i.e. when serving https connections)
ssl-config {
protocol = "TLSv1.2"
}
}
ssl-config {
logger = "com.typesafe.sslconfig.akka.util.AkkaLoggerBridge"
}
akka.default-dispatcher {
type = "Dispatcher"
executor = "thread-pool-executor"
thread-pool-executor {
# minimum number of threads to cap factor-based core number to
core-pool-size-min = 5
# No of core threads ... ceil(available processors * factor)
core-pool-size-factor = 2.0
# maximum number of threads to cap factor-based number to
core-pool-size-max = 20
}
# Throughput defines the maximum number of messages to be
# processed per actor before the thread jumps to the next actor.
# Set to 1 for as fair as possible.
throughput = 5
}
The following are the two thread types with the maximum count.
One is:
priority:5 - threadId:0x00007f9414008800 - nativeId:0x5575 - nativeId (decimal):21877 - state:WAITING
stackTrace:
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000740597170> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at akka.dispatch.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Locked ownable synchronizers:
The other is:
default-scheduler-1
priority:5 - threadId:0x00007f947805d800 - nativeId:0x638a - nativeId (decimal):25482 - state:TIMED_WAITING
stackTrace:
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at akka.actor.LightArrayRevolverScheduler.waitNanos(LightArrayRevolverScheduler.scala:87)
at akka.actor.LightArrayRevolverScheduler$$anon$3.nextTick(LightArrayRevolverScheduler.scala:271)
at akka.actor.LightArrayRevolverScheduler$$anon$3.run(LightArrayRevolverScheduler.scala:241)
at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
- None
The CPU utilization goes up to 800%. What is causing such high CPU utilization? Please advise.
I have a method that gets called by a third party from an io_service. My method is supposed to return a boolean. However, I need to post another task to the same io_service and wait for it to complete before I know the result. How can I return control to the IO loop while I wait for the other task to finish?
(I could add multiple threads, but then there could be multiple calls to my method, and you'd still end up with a deadlock.)
Call graph before:
<thread>      io_service               third_party    my_stuff
      |            |                        |             |
      |---run----->|                        |             |
      |            |-->some_posted_method-->|             |
      |            |                        |--callback-->|
      |            |                        |<--boolean---|
      |            |(next task)             |             |
      |            |                        |             |
Call graph preferred:
<thread>      io_service               third_party    my_stuff
      |            |                        |             |
      |---run----->|                        |             |
      |            |-->some_posted_method-->|             |
      |            |                        |--callback-->|
      |            |<----some_way_to_return_control-------|
      |            |(next task)             |             |
      |            |--------some_kind_of_resume---------->|
      |            |                        |<--boolean---|
      |            |                        |             |
"third_party" should call "my_stuff" asynchronously, specify a handler that will continue as soon as result is ready, and return control to io_service. "third_party" is a little bit worrying here as it's possible you cannot modify it or it's not desirable.
Another approach would be to use another io_service instance for "my_stuff": "my_stuff" interface would be synchronous but implementation would use io_service in the same or another thread to accomplish its task. Never tried this but I don't see any problem from what I know about Boost.Asio.
As Tanner Sansbury mentioned, you can call poll_one from your event handler, and it will execute the available handlers.
Note that you have to call poll_one, since poll is not guaranteed to return if new handlers keep being added. And since poll_one may not have a handler ready to execute yet, you may want to add a sleep to prevent busy-waiting. My code ended up like this:
while( !syncDone ) {
    boost::system::error_code ec;
    int handlersExecuted = ioService.poll_one(ec);
    if( ec ) {
        // Not sure what error there could be; log it
    }
    if( 0 == handlersExecuted ) {
        std::this_thread::sleep_for(
            std::chrono::milliseconds(10)
        );
    }
}