Set multiple alarms using AlarmClock.ACTION_SET_ALARM without a PendingIntent - alarmmanager

I want to add multiple alarms to the stock Clock app (Android --> Clock --> Alarm), which means my app should not schedule its own pending alarms. I only want to send the alarm information to AlarmClock.
How can I do that?
Clock Alarm app in Android
I only want to add/set the alarm start times from my app:
Alarm information in my app
I tried the following:
if (b) // checking if the user turned on the switch
{
    for (int i = 0; i < SharedSleepPlanList.size(); i++) {
        Intent intent = new Intent(AlarmClock.ACTION_SET_ALARM);
        intent.putExtra(AlarmClock.EXTRA_HOUR, ivarHour);
        intent.putExtra(AlarmClock.EXTRA_MINUTES, ivarMinute);
        intent.putExtra(AlarmClock.EXTRA_MESSAGE, sAlarmInfo[0]);
        if (intent.resolveActivity(getPackageManager()) != null) {
            startActivity(intent);
        } else {
            Toast.makeText(Activity_drawerItemSleepPlan.this, "No support", Toast.LENGTH_SHORT).show();
        }
    }
    Toast.makeText(Activity_drawerItemSleepPlan.this, "Switch off", Toast.LENGTH_SHORT).show();
}
I also tried intent arrays, but it didn't work. I can only send one alarm to Clock --> Alarm.
Result after sending the alarm info in the loop
I'm new to Android.
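For reference (not part of the original question): the AlarmClock contract also defines EXTRA_SKIP_UI, which asks the Clock app to set the alarm without opening its UI, and the manifest needs the com.android.alarm.permission.SET_ALARM permission. A minimal sketch of the loop with it added - the entry type and its getters are hypothetical, and whether EXTRA_SKIP_UI is honoured depends on the installed Clock app:
for (int i = 0; i < SharedSleepPlanList.size(); i++) {
    SleepPlanEntry entry = SharedSleepPlanList.get(i); // hypothetical entry type holding hour/minute/label

    Intent intent = new Intent(AlarmClock.ACTION_SET_ALARM);
    intent.putExtra(AlarmClock.EXTRA_HOUR, entry.getHour());
    intent.putExtra(AlarmClock.EXTRA_MINUTES, entry.getMinute());
    intent.putExtra(AlarmClock.EXTRA_MESSAGE, entry.getLabel());
    intent.putExtra(AlarmClock.EXTRA_SKIP_UI, true); // set the alarm without opening the Clock UI each iteration

    if (intent.resolveActivity(getPackageManager()) != null) {
        startActivity(intent);
    }
}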

Related

DeadlineExceededException when creating tasks on startup

I have a Spring Boot 2.4.5 application deployed on Google Cloud Run (image created with Jib). On startup I want to create a Cloud Task but I get a DeadlineExceededException.
If I run the same task creation code triggered by an HTTP request, the task is created. And the task that was supposed to be created on startup is also created. It's as if something is missing at startup that prevents the task from being created.
The startup event
@EventListener(ApplicationReadyEvent.class)
public void doSomethingAfterStartup() {
    LOGGER.info("ApplicationReadyEvent");
    String message = "GCP New Instance Start " + Instant.now();
    cloudTasksService.createTask("xxxx", "us-central1", "xxxx", message, 60);
}
The task creation code
public void createTask(String projectId, String locationId, String queueId, String message, Integer delay) throws IOException {
    try (CloudTasksClient client = CloudTasksClient.create()) {
        LOGGER.info("Client created");
        String url = "xxxxxxxxx";
        String payload = String.format("{ \"text\": \"%s\"}", message);
        String queuePath = QueueName.of(projectId, locationId, queueId).toString();
        Instant eta = Instant.now().plusSeconds(delay);
        Task.Builder taskBuilder =
            Task.newBuilder()
                .setScheduleTime(Timestamp.newBuilder().setSeconds(eta.getEpochSecond()).build())
                .setHttpRequest(
                    HttpRequest.newBuilder()
                        .setBody(ByteString.copyFrom(payload, Charset.defaultCharset()))
                        .setUrl(url)
                        .setHttpMethod(HttpMethod.POST)
                        .build());
        LOGGER.info("TaskBuilder ready");
        Task task = client.createTask(queuePath, taskBuilder.build());
        LOGGER.info("Task created: {}", task.getName());
    }
}
The HTTP endpoint
@GetMapping("/tasks")
public ResponseEntity<Void> task(@RequestParam Integer delay) throws IOException {
    cloudTasksService.createTask("xxxx", "us-central1", "xxxx", "using HTTP request", delay);
    return ResponseEntity.accepted().build();
}
The exception
com.google.api.gax.rpc.DeadlineExceededException: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: Deadline exceeded after 5.200272920s.
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:51)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1074)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
at com.google.common.util.concurrent.AbstractFuture.addListener(AbstractFuture.java:724)
at com.google.common.util.concurrent.ForwardingListenableFuture.addListener(ForwardingListenableFuture.java:45)
at com.google.api.core.ApiFutureToListenableFuture.addListener(ApiFutureToListenableFuture.java:52)
at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1047)
at com.google.api.core.ApiFutures.addCallback(ApiFutures.java:63)
at com.google.api.gax.grpc.GrpcExceptionCallable.futureCall(GrpcExceptionCallable.java:67)
at com.google.api.gax.rpc.UnaryCallable$1.futureCall(UnaryCallable.java:126)
at com.google.api.gax.tracing.TracedUnaryCallable.futureCall(TracedUnaryCallable.java:75)
at com.google.api.gax.rpc.UnaryCallable$1.futureCall(UnaryCallable.java:126)
at com.google.api.gax.rpc.UnaryCallable.futureCall(UnaryCallable.java:87)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.tasks.v2.CloudTasksClient.createTask(CloudTasksClient.java:1915)
at com.google.cloud.tasks.v2.CloudTasksClient.createTask(CloudTasksClient.java:1885)
at com.sps.playground.CloudTasksService.createTask(CloudTasksService.java:55)
It looks like the worker is not ready when the task is being created. I would not recommend creating tasks on startup, since this often fails when the instance has not yet passed its readiness check while the request is being processed. That would also explain why the task is created normally when triggered by an HTTP request.
You can tackle this by decreasing your startup time, following the general Cloud Run startup recommendations. Also, as you are using Java with Spring Boot, it may be worth checking the "Reducing startup time" recommendations for Spring Boot as well.
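If the task really has to be created during startup, one possible workaround (not part of the original answer, sketched here under the assumption that the standard gax settings pattern applies to your client version) is to give the createTask RPC a longer deadline than the default few seconds, so a slow Cloud Run cold start does not immediately time out:
// Hypothetical helper: builds a CloudTasksClient whose createTask RPC
// allows up to 30 seconds instead of the generated default timeouts.
import java.io.IOException;
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.CloudTasksSettings;
import org.threeten.bp.Duration;

public class TasksClientFactory {

    public static CloudTasksClient clientWithLongerDeadline() throws IOException {
        CloudTasksSettings.Builder builder = CloudTasksSettings.newBuilder();
        // Start from the generated defaults and only stretch the timeouts.
        builder.createTaskSettings().setRetrySettings(
                builder.createTaskSettings().getRetrySettings().toBuilder()
                        .setInitialRpcTimeout(Duration.ofSeconds(30))
                        .setMaxRpcTimeout(Duration.ofSeconds(30))
                        .setTotalTimeout(Duration.ofSeconds(30))
                        .build());
        return CloudTasksClient.create(builder.build());
    }
}
A longer deadline only hides a slow cold start, though; reducing the startup time as described above is still the primary fix.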

GCP Cloud Tasks: shorten period for creating a previously created named task

We are developing a GCP Cloud Tasks based queue process that sends a status email whenever a particular Firestore doc write-trigger fires. The reason we use Cloud Tasks is so a delay can be introduced (using the scheduledTime property, 2 minutes in the future) before the email is sent, and to control dedup (by using a task name formatted as [firestore-collection-name]-[doc-id]), since the 'write' trigger on the Firestore doc can fire several times as the document is being created and then quickly updated by backend cloud functions.
Once the task's delay period has been reached, the task runs and the email is sent with the updated Firestore document info included, after which the task is deleted from the queue and all is good.
Except:
If the user updates the Firestore doc (say 20 or 30 min later) we want to resend the status email but are unable to create the task using the same task-name. We get the following error:
409 The task cannot be created because a task with this name existed too recently. For more information about task de-duplication see https://cloud.google.com/tasks/docs/reference/rest/v2/projects.locations.queues.tasks/create#body.request_body.FIELDS.task.
This was unexpected, as the queue is empty at this point and the last task completed successfully. The documentation referenced in the error message says:
If the task's queue was created using Cloud Tasks, then another task
with the same name can't be created for ~1hour after the original task
was deleted or executed.
Question: is there some way in which this restriction can be by-passed by lowering the amount of time, or even removing the restriction all together?
The short answer is no. As you've already pointed out, the docs are very clear regarding this behavior, and you have to wait about one hour before creating a task with the same name as one that was previously created. Neither the API nor the client libraries allow you to decrease this time.
Having said that, I would suggest that instead of reusing the same task ID, you use a different one for each task and put an identifier in the body of the request. For example, using Python:
from google.cloud import tasks_v2
from google.protobuf import timestamp_pb2
import datetime

def create_task(project, queue, location, payload=None, in_seconds=None):
    client = tasks_v2.CloudTasksClient()
    parent = client.queue_path(project, location, queue)
    task = {
        'app_engine_http_request': {
            'http_method': 'POST',
            'relative_uri': '/task/' + queue
        }
    }
    if payload is not None:
        converted_payload = payload.encode()
        task['app_engine_http_request']['body'] = converted_payload
    if in_seconds is not None:
        d = datetime.datetime.utcnow() + datetime.timedelta(seconds=in_seconds)
        timestamp = timestamp_pb2.Timestamp()
        timestamp.FromDatetime(d)
        task['schedule_time'] = timestamp
    response = client.create_task(parent, task)
    print('Created task {}'.format(response.name))
    print(response)

# You can change DOCUMENT_ID with USER_ID or something to identify the task
create_task(PROJECT_ID, QUEUE, REGION, DOCUMENT_ID)
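For completeness, the same idea with the Java client used earlier on this page - leave the task name unset so Cloud Tasks assigns a unique one (no 409 on re-creation) and carry the Firestore document id in the request body instead. This is only an illustrative sketch; the project, queue, and URL values are placeholders:
import java.nio.charset.StandardCharsets;
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.HttpRequest;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.Task;
import com.google.protobuf.ByteString;

public class StatusEmailTasks {

    public static Task enqueueStatusEmail(CloudTasksClient client, String documentId) {
        String queuePath = QueueName.of("my-project", "us-central1", "status-emails").toString();
        String payload = String.format("{ \"documentId\": \"%s\" }", documentId);

        Task task = Task.newBuilder()
                // no setName(...): Cloud Tasks generates a unique name for every call,
                // so the ~1 hour tombstone on reused names never applies
                .setHttpRequest(HttpRequest.newBuilder()
                        .setUrl("https://example.com/send-status-email")
                        .setHttpMethod(HttpMethod.POST)
                        .setBody(ByteString.copyFrom(payload, StandardCharsets.UTF_8))
                        .build())
                .build();

        return client.createTask(queuePath, task);
    }
}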
Facing a similar problem of needing to debounce multiple instances of Firestore write-trigger functions, we worked around the default Cloud Tasks task-name based dedup mechanism (still a constraint in Nov 2022) by building a small debounce "helper" using Firestore transactions.
We're using a helper collection _syncHelper_ to implement a delayed throttle for the side effects of write-trigger fires - in the OP's case, sending one email for all writes within 2 minutes.
In our case we are using the Firebase Functions task queue utils rather than interacting with Cloud Tasks directly, but that's immaterial to the solution. The key is to determine the task's execution time in advance and use that as the "dedup key":
async function enqueueTask(shopId) {
    const queueName = 'doSomething';
    const now = new Date();
    const next = new Date(now.getTime() + 2 * 60 * 1000);
    try {
        const shouldEnqueue = await getFirestore().runTransaction(async t => {
            const syncRef = getFirestore().collection('_syncHelper_').doc(<collection_id-doc_id>);
            const doc = await t.get(syncRef);
            let data = doc.data();
            if (data?.timestamp.toDate() > now) {
                return false;
            }
            await t.set(syncRef, { timestamp: Timestamp.fromDate(next) });
            return true;
        });
        if (shouldEnqueue) {
            let queue = getFunctions().taskQueue(queueName);
            await queue.enqueue(
                { timestamp: next.toISOString() },
                { scheduleTime: next }
            );
        }
    } catch {
        ...
    }
}
This will ensure a new task is enqueued only if the "next execution" time has passed.
The execution operation (also a cloud function in our case) will remove the sync data entry if it hasn't been changed since it was executed:
exports.doSomething = functions.tasks.taskQueue({
    retryConfig: {
        maxAttempts: 2,
        minBackoffSeconds: 60,
    },
    rateLimits: {
        maxConcurrentDispatches: 2,
    }
}).onDispatch(async data => {
    let { timestamp } = data;
    await sendYourEmailHere();
    await getFirestore().runTransaction(async t => {
        const syncRef = getFirestore().collection('_syncHelper_').doc(<collection_id-doc_id>);
        const doc = await t.get(syncRef);
        let data = doc.data();
        if (data?.timestamp.toDate() <= new Date(timestamp)) {
            await t.delete(syncRef);
        }
    });
});
This isn't a bulletproof solution (if the doSomething() execution function has high latency, for example), but it's good enough for 99% of our use cases.

Not able to solve ThrottlingException in DynamoDB

I have a Lambda function which performs a transaction in DynamoDB similar to this.
try {
    const reservationId = genId();
    await transactionFn();
    return {
        statusCode: 200,
        body: JSON.stringify({id: reservationId})
    };

    async function transactionFn() {
        try {
            await docClient.transactWrite({
                TransactItems: [
                    {
                        Put: {
                            TableName: ReservationTable,
                            Item: {
                                reservationId,
                                userId,
                                retryCount: Number(retryCount),
                            }
                        }
                    },
                    {
                        Update: {
                            TableName: EventDetailsTable,
                            Key: {eventId},
                            ConditionExpression: 'available >= :minValue',
                            UpdateExpression: `set available = available - :val, attendees = attendees + :val, lastUpdatedDate = :updatedAt`,
                            ExpressionAttributeValues: {
                                ":val": 1,
                                ":updatedAt": currentTime,
                                ":minValue": 1
                            }
                        }
                    }
                ]
            }).promise();
            return true;
        } catch (e) {
            const transactionConflictError = e.message.search("TransactionConflict") !== -1;
            // const throttlingException = e.code === 'ThrottlingException';
            console.log("transactionFn:transactionConflictError:", transactionConflictError);
            if (transactionConflictError) {
                retryCount += 1;
                await transactionFn();
                return;
            }
            // if (throttlingException) {
            //
            // }
            console.log("transactionFn:e.code:", JSON.stringify(e));
            throw e;
        }
    }
It just updates two tables on each API call. If it encounters a transaction conflict error, it simply retries the transaction by recursively calling the function.
The eventDetails table gets a lot of updates (checked with AWS Contributor Insights), so I raised its provisioned capacity to a higher value than before.
For the reservationTable the capacity mode is on-demand.
When I load test this API with 400 (or more) users using JMeter (master-slave configuration), I get throttling errors for some API calls, and some calls take more than 20 seconds to respond.
When I checked X-Ray for this API, I found that DynamoDB is taking too much time on this transaction for the slower calls.
Even with much higher provisioning (I tried on-demand mode too), I am still getting throttling exceptions for API calls:
ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded.
Consider increasing your provisioning level with the UpdateTable API.
UPDATE
And one more thing: when I do the load testing, I always use the same eventId. That means I am always updating the same row for all the API requests. I have found this article, which says that a single partition can only handle up to 1,000 WCU. Since I am always updating the same row in the eventDetails table during load testing, is that causing this issue?
I had this exact error, and it helped me to change from On Demand to Provisioned under Read/write capacity mode. Try changing that; if it doesn't help, we'll go from there.
From the link you cite in your update, also described in an AWS help article here, it sounds like the issue is that all of your load testers are writing to the same entry in the table, which is going to be in the same partition, subject to the hard limit of 1,000 WCU.
Have you tried repeating this experiment with the load testers writing to different partitions?
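To make "writing to different partitions" concrete for a single hot event, one common approach is the write-sharding pattern: split the event's available count across N shard items and decrement a random shard with a condition. The sketch below uses the AWS SDK for Java v2 purely for illustration (the question uses the Node.js SDK); the table, key, and attribute names are assumptions, and the counts must be pre-split across the shard items:
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;

public class ShardedEventCounter {

    private static final int SHARDS = 10; // illustrative shard count

    // Reserve one unit of availability for eventId, trying shards until one still has capacity.
    public static boolean reserveSeat(DynamoDbClient ddb, String table, String eventId) {
        int start = ThreadLocalRandom.current().nextInt(SHARDS);
        for (int i = 0; i < SHARDS; i++) {
            String shardKey = eventId + "#" + ((start + i) % SHARDS); // each shard is its own partition key
            try {
                ddb.updateItem(UpdateItemRequest.builder()
                        .tableName(table)
                        .key(Map.of("eventId", AttributeValue.builder().s(shardKey).build()))
                        .updateExpression("SET available = available - :one, attendees = attendees + :one")
                        .conditionExpression("available >= :one")
                        .expressionAttributeValues(Map.of(":one", AttributeValue.builder().n("1").build()))
                        .build());
                return true; // this shard still had capacity
            } catch (ConditionalCheckFailedException shardEmpty) {
                // this shard is exhausted; try the next one
            }
        }
        return false; // every shard is empty
    }
}
The trade-off is that reading the remaining availability now means summing the shards, and one shard can look sold out while another still has capacity, which the retry loop above works around.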

How to connect an agent in Amazon Connect on an outbound call

I have a simple contact flow like below from which I trigger the call from Amazon Connect (claimed phone number in AWS Connect) to the end customer (real customer phone number):
Now I want to connect an agent on the Amazon Connect end.
When I trigger the following code, I need the call to go from Amazon Connect (customer agent) to the end customer (real customer phone number):
const AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });

exports.handler = (event, context, callback) => {
    let connect = new AWS.Connect();
    const customerName = event.name;
    const customerPhoneNumber = event.number;
    const dayOfWeek = event.day;

    let params = {
        "InstanceId": '12345l-abcd-1234-abcde-123456789bcde',
        "ContactFlowId": '987654-lkjhgf-9875-abcde-poiuyt0987645',
        "SourcePhoneNumber": '+1123456789',
        "DestinationPhoneNumber": customerPhoneNumber,
        "Attributes": {
            'name': customerName,
            'dayOfWeek': dayOfWeek
        }
    };

    connect.startOutboundVoiceContact(params, function (error, response) {
        if (error) {
            console.log(error);
            callback("Error", null);
        } else {
            console.log('Initiated an outbound call with Contact Id ' + JSON.stringify(response.ContactId));
            callback(null, 'Success');
        }
    });
};
How do I add the customer agent in the contact flow?
Logging is not working (I am not able to find any logs in CloudWatch).
Is my call recording block added in the right section of the contact flow?
To connect the call to an agent, you need to add a “set working queue” block to set the call to route to a queue where you have available agents. After you set your queue, replace the “disconnect / hang up” block with a “transfer to queue” block. This will route the call to an available agent or queue the call if no agent is immediately available.
Recording will only occur for the portion of the call between the agent and the outside party, so you won’t see any recordings for calls that didn’t get connected to an agent. Since you have the “set recording behavior” block set to “customer and agent” in your flow already, you should get a recording file when the call gets connected to an agent with the steps above.

ClusterReceptionistExtension doesn't register Subscriber

I am trying to use Akka pub-sub within our application. I have a Play application which is part of an Akka cluster. I want to use the Akka cluster client to make this application listen/subscribe to topics; messages will be published from other applications.
Cluster/Subscriber side code [within Play application]
class MyRealtimeActor extends Actor {

  import DistributedPubSubMediator.{ Subscribe, SubscribeAck }

  def receive = {
    case SubscribeAck(Subscribe("metrics", _)) => {
      Logger.info("SUBSCRIBED TO MESSAGES")
      context become ready
    }
  }

  def ready: Actor.Receive = {
    case m => {
      Logger.info("RECEIVED MESSAGE " + m)
    }
  }
}
and I instantiate it like this in Global:
val cluster: ActorSystem = ActorSystem("ClusterSystem")
val metricsActor = Global.cluster.actorOf(Props(new MyRealtimeActor), "metricsActor")
ClusterReceptionistExtension(cluster).registerSubscriber("metrics", metricsActor)
and the conf file has the following
akka {
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
    extensions = ["akka.contrib.pattern.DistributedPubSubExtension",
                  "akka.contrib.pattern.ClusterReceptionistExtension"]
  }
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2551
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://ClusterSystem#127.0.0.1:2551"
    ]
    auto-down-unreachable-after = 10s
  }
}
When I start the Play application I can see the following log:
[INFO] [11/06/2013 17:48:42.926] [ClusterSystem-akka.actor.default-dispatcher-3] [Cluster(akka://ClusterSystem)] Cluster Node [akka.tcp://ClusterSystem#127.0.0.1:2551] - Node [akka.tcp://ClusterSystem#127.0.0.1:2551] is JOINING, roles []
[INFO] [11/06/2013 17:48:42.942] [ClusterSystem-akka.actor.default-dispatcher-5] [akka://ClusterSystem/deadLetters] Message [akka.contrib.pattern.DistributedPubSubMediator$SubscribeAck] from Actor[akka://ClusterSystem/user/distributedPubSubMediator#1608017981] to Actor[akka://ClusterSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
I would like to know why the actor is not properly subscribed. I am expecting it to print SUBSCRIBED TO MESSAGES.
The thing is that the SubscribeAck is sent to the sender of the Subscribe message and not the actor in the Subscribe message. To get the SubscribeAck sent to the metricsActor, it would have to send the Subscribe itself, and directly to the mediator.
The receptionist is used by the cluster client code; you normally shouldn't use it to subscribe your own actors.
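A minimal sketch of that idea, shown with the contrib module's Java API for illustration (class and actor names are made up; in Scala the equivalent is DistributedPubSubExtension(context.system).mediator ! Subscribe("metrics", self) from inside the actor, e.g. in preStart):
import akka.actor.ActorRef;
import akka.actor.UntypedActor;
import akka.contrib.pattern.DistributedPubSubExtension;
import akka.contrib.pattern.DistributedPubSubMediator;

public class MetricsSubscriber extends UntypedActor {

    private final ActorRef mediator =
            DistributedPubSubExtension.get(getContext().system()).mediator();

    @Override
    public void preStart() {
        // The actor sends Subscribe itself, so the mediator's SubscribeAck is replied to this actor.
        mediator.tell(new DistributedPubSubMediator.Subscribe("metrics", getSelf()), getSelf());
    }

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof DistributedPubSubMediator.SubscribeAck) {
            // now subscribed to the "metrics" topic
        } else {
            // messages published to the topic arrive here as ordinary messages
            System.out.println("RECEIVED MESSAGE " + message);
        }
    }
}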