I'm calling the Facebook Graph API to get the email, Facebook ID and name of a user who logs into my app through Facebook.
I successfully get the information; I'm now trying to use dispatch groups so that the function calling the Graph API waits until the asynchronous graph request completes before returning.
I can't figure out why this code is locking up.
1) Create a dispatch group
2) Enter said dispatch group
3) Leave the group once info is retrieved or an error is found
4) Wait for the group to be left before returning
It seems like my dispatch group enter is not called correctly but I can't figure out why.
class func getFBInformation()->Bool {
var fbResult = false
let fbGraphGroup = DispatchGroup()
fbGraphGroup.enter()
FBSDKGraphRequest(graphPath: "/me", parameters: ["fields": "id, name, email"]).start { (connection, result, err) in
if err != nil {
fbResult = false
print("Pre Error Signal")
fbGraphGroup.leave()
return
}
if let resultDict = result as? [String:AnyObject] {
<Do things with graph results>
print("Pre success signal")
fbResult = true
fbGraphGroup.leave()
}
}
fbGraphGroup.wait()
print("Post signal")
return fbResult
}
How could it work?
First you enter the group, then you block the main thread waiting for the completion handler to leave the group. But the completion handler cannot execute on the (blocked) main thread, so it never gets to leave the group.
As far as I know, your completion handler is dispatched on the main queue by the API, so no other kind of synchronization is necessary.
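As an illustration only (reusing the FBSDKGraphRequest call from the question; a sketch, not a verified fix), the method could hand its result back through a completion handler of its own instead of blocking:
class func getFBInformation(completion: @escaping (Bool) -> Void) {
    FBSDKGraphRequest(graphPath: "/me", parameters: ["fields": "id, name, email"]).start { (connection, result, err) in
        if err != nil {
            // Report failure to the caller instead of leaving a group
            completion(false)
            return
        }
        if let resultDict = result as? [String: AnyObject] {
            // <Do things with graph results>
            completion(true)
        }
    }
}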
I encountered this exception:
System.IO.IOException: Unable to read data from the transport connection: Connection reset by peer.
 ---> System.Net.Sockets.SocketException (104): Connection reset by peer
 --- End of inner exception stack trace ---
   at Google.Cloud.PubSub.V1.SubscriberClientImpl.SingleChannel.HandleRpcFailure(Exception e)
   at Google.Cloud.PubSub.V1.SubscriberClientImpl.SingleChannel.HandlePullMoveNext(Task initTask)
   at Google.Cloud.PubSub.V1.SubscriberClientImpl.SingleChannel.StartAsync()
   at Google.Cloud.PubSub.V1.Tasks.ForwardingAwaiter.GetResult()
   at Google.Cloud.PubSub.V1.Tasks.Extensions.<>c__DisplayClass4_0.<g__Inner|0>d.MoveNext()
--- End of stack trace from previous location ---
The "Invoke" function below is run by a scheduler every 5 seconds to pull a message from my topic:
public async Task Invoke()
{
var subscriber = await SubscriberClient.CreateAsync(CreateSubscriptionName());
await subscriber.StartAsync((msg, cancellationToken) =>
{
//....
return Task.FromResult(SubscriberClient.Reply.Ack);
});
await subscriber.StopAsync(CancellationToken.None);
}
How do I fix this?
Thanks!
I've already checked the docs, which say:
PublisherClient and SubscriberClient are expensive to create, so when regularly publishing or subscribing to the same topic or subscription then a singleton client instance should be created and used for the lifetime of the application.
But I still don't know how to apply that.
My guess is that I'm leaving too many connections open?
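For what it's worth, here is a rough sketch of what that guidance could look like (the class name and hosting details are assumptions; CreateSubscriptionName() and the handler body are the ones from Invoke() above): create the SubscriberClient once, start it once for the lifetime of the application, and only stop it on shutdown, instead of creating and stopping a client every 5 seconds.
using System;
using System.Threading;
using System.Threading.Tasks;
using Google.Cloud.PubSub.V1;

public class PubSubListener
{
    private readonly Lazy<Task<SubscriberClient>> _subscriber;

    public PubSubListener()
    {
        // Create the client lazily, exactly once.
        _subscriber = new Lazy<Task<SubscriberClient>>(
            () => SubscriberClient.CreateAsync(CreateSubscriptionName()));
    }

    // Call once at application startup. StartAsync keeps the streaming pull open
    // until StopAsync is called, so there is no need to poll every 5 seconds.
    public async Task RunAsync(CancellationToken appStopping)
    {
        var subscriber = await _subscriber.Value;
        appStopping.Register(() => subscriber.StopAsync(CancellationToken.None));
        await subscriber.StartAsync((msg, cancellationToken) =>
        {
            //....
            return Task.FromResult(SubscriberClient.Reply.Ack);
        });
    }

    // Same helper as in the question above.
    private SubscriptionName CreateSubscriptionName() => throw new NotImplementedException();
}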
I'm new to the WTelegramClient C# library; I was used to TLSharp (which doesn't work anymore).
I'm trying to understand how to get user info after an update is received.
I have the example code that listens for updates and writes them to the console,
but I can't understand how I can respond to the user that sent the message (new update).
I think I need the user id/access_hash to send a message to the sender, but I can't figure out how to get them.
Here is how I get the new messages, but it only gives me the username or name/id:
private static void DisplayMessage(MessageBase messageBase, bool edit = false)
{
if (edit) Console.Write("(Edit): ");
switch (messageBase)
{
case Message m: Console.WriteLine($"{Peer(m.from_id) ?? m.post_author} in {Peer(m.peer_id)}> {m.message}"); break;
case MessageService ms: Console.WriteLine($"{Peer(ms.from_id)} in {Peer(ms.peer_id)} [{ms.action.GetType().Name[13..]}]"); break;
}
}
Here I can get the name or username of the sender (if any) and the message itself.
A MessageService (from a user, not a channel or group), for example, only gives me the first name and last name.
How can I get all the info about the sender or the chat itself (I want to try marking the message as read)?
I'm used to TLSharp and the new library WTelegramClient is different.
Thanks!!!
Below is a quick example of how to modify DisplayMessage to react to a message sent in private by a user, get the details about this user, verify who it is and which text was sent to us, and then send them a message back.
Notes:
For this example to work, you will need the latest version of Program_ListenUpdates.cs with static variables
DisplayMessage is now async Task, in order to use await
You can pass user to send a message because class User is implicitly converted to InputPeerUser (with the user id/access_hash).
You can do similarly for messages coming from chats, using PeerChat/PeerChannel classes and the _chats dictionary to get chat details
private static async Task DisplayMessage(MessageBase messageBase, bool edit = false)
{
if (edit) Console.Write("(Edit): ");
switch (messageBase)
{
case Message m:
Console.WriteLine($"{Peer(m.from_id) ?? m.post_author} in {Peer(m.peer_id)}> {m.message}");
if (m.flags.HasFlag(Message.Flags.out_))
break; // ignore our own outgoing messages
if (m.Peer is PeerUser pu) // got a message in a direct chat with a user
{
if (_users.TryGetValue(pu.user_id, out var user)) // get user details
{
if (user.username == "Wiz0u" && m.message == "hello")
{
await Client.SendMessageAsync(user, $"hi {user.first_name}, I'm {My.first_name}");
}
}
}
break;
case MessageService ms:
Console.WriteLine($"{Peer(ms.from_id)} in {Peer(ms.peer_id)} [{ms.action.GetType().Name[13..]}]");
break;
}
}
Hi, I am facing an issue when triggering the Tapkey lock. It scans for the lock and finds it successfully. Then, when I trigger unlock against the PhysicalLockId, the lock blinks red and I get the message Unauthorized.
I'm using the token exchange mechanism and generated an identity provider against my OAuth client.
The lock is assigned to the user as unrestricted.
iOS trigger lock function:
private func triggerLock(physicalLockId: String) -> TKMPromise<Bool> {
guard let bluetoothAddress = self.bleLockScanner.getLock(
physicalLockId: physicalLockId)?.bluetoothAddress else {
self.showAlert(title: "Alert", message: "Lock not nearby", okTitle: R.string.localizable.commonCancel(), cancelString: R.string.localizable.commonScanAgain(), cancelHandle: { _ in
self.scanLock()
})
return TKMAsync.promiseFromResult(false)
}
let ct = TKMCancellationTokens.fromTimeout(timeoutMs: 15000)
// Use the BLE lock communicator to send a command to the lock
return self.bleLockCommunicator.executeCommandAsync(
bluetoothAddress: bluetoothAddress,
physicalLockId: physicalLockId,
commandFunc: { tlcpConnection -> TKMPromise<TKMCommandResult> in
let triggerLockCommand = TKMDefaultTriggerLockCommandBuilder()
.build()
// Pass the TLCP connection to the command execution facade
return self.commandExecutionFacade!.executeStandardCommandAsync(tlcpConnection, command: triggerLockCommand, cancellationToken: ct)
},
cancellationToken: ct)
// Process the command's result
.continueOnUi({ commandResult in
let code: TKMCommandResult.TKMCommandResultCode = commandResult?.code ??
TKMCommandResult.TKMCommandResultCode.technicalError
switch code {
case TKMCommandResult.TKMCommandResultCode.ok:
return true
default:
return false
}
})
.catchOnUi({ (_: TKMAsyncError) -> Bool in
NSLog("Trigger lock failed")
self.showAlert(title: "Alert", message: "Trigger lock failed")
return false
})
}
The error Unauthorized usually means that the current user doesn't have a grant for this specific lock. In other cases there would be a different error code.
As you are using your own app, and therefore your own identity provider, you also have to create a contact for these specific users:
https://developers.tapkey.io/openapi/tapkey_management_api_v1/#/Contacts/Contacts_Put
When creating a contact, you have to specify the id of your identity provider as ipId; otherwise a contact for a Tapkey user will be created.
As far as I can see in your account, you successfully created a user for your identity provider, but then created a contact for a Tapkey user.
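Purely as an illustration of the shape of that call (a sketch: the URL, token and every field name except ipId are placeholders, not taken from the Tapkey documentation; check the linked Contacts_Put spec for the real route and schema), the request, typically issued from your backend, might look roughly like this:
import Foundation

// Hypothetical sketch only: replace the URL, token and "identifier" field with the
// values defined by the Contacts_Put spec; only ipId comes from the answer above.
let contactsPutUrl = URL(string: "https://example.invalid/contacts-put-route")! // placeholder route
var request = URLRequest(url: contactsPutUrl)
request.httpMethod = "PUT"
request.setValue("Bearer <management-api-token>", forHTTPHeaderField: "Authorization")
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try? JSONSerialization.data(withJSONObject: [
    "identifier": "<the user's id at your identity provider>", // placeholder field name
    "ipId": "<your identity provider's id>"                    // as described above
])
URLSession.shared.dataTask(with: request) { _, response, error in
    // Inspect the response / error here.
}.resume()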
I have a Lambda function which does a series of actions, and a React application which triggers the Lambda function.
Is there a way I can send a partial response from the Lambda function after each action is complete?
const testFunction = async (event, context, callback) => {
let partialResponse1 = await action1(event);
// send partial response to client
let partialResponse2 = await action2(partialResponse1);
// send partial response to client
let partialResponse3 = await action3(partialResponse2);
// send partial response to client
let response = await action4(partialResponse3);
// send final response
}
Is this possible with Lambda functions? If so, how can we do it? Any reference docs or sample code would be a great help.
Thanks.
Note: This is fairly a simple case of showing a loader with a % on the client side. I don't want to overcomplicate things with SQS or Step Functions.
I am still looking for an answer to this.
From what I understand, you're using API Gateway + Lambda and are looking to show the progress of the Lambda via the UI.
Since each step must finish before the next step begins, I see no reason not to call the Lambda 4 times, or split it into 4 separate Lambdas.
E.g.:
// Not real syntax!
try {
res1 = await ajax.post(/process, {stage: 1, data: ... });
out(stage 1 complete);
res2 = await ajax.post(/process, {stage: 2, data: res1});
out(stage 2 complete);
res3 = await ajax.post(/process, {stage: 3, data: res2});
out(stage 3 complete);
res4 = await ajax.post(/process, {stage: 4, data: res3});
out(stage 4 complete);
out(process finished);
} catch(err) {
out(stage {$err.stage-number} failed to complete);
}
If you still want all 4 calls to be executed during the same Lambda invocation, you may do the following. This is especially true if the process is expected to be very long, since it's usually not good practice to keep a long-hanging HTTP transaction open.
You may implement it by saving the "progress" in a database, and when the process is complete, saving the results to the database as well.
All the client needs to do is query the status every X seconds.
// Not real syntax
Gateway-API --> lambda1 - startProcess(): returns ID {
uuid = randomUUID();
write to dynamoDB { status: starting }.
send sqs-message-to-start-process(data, uuid);
return response { uuid: uuid };
}
SQS --> lambda2 - execute(): returns void {
try {
let partialResponse1 = await action1(event);
write to dynamoDB { status: action 1 complete }.
// send partial response to client
let partialResponse2 = await action2(partialResponse1);
write to dynamoDB { status: action 2 complete }.
// send partial response to client
let partialResponse3 = await action3(partialResponse2);
write to dynamoDB { status: action 3 complete }.
// send partial response to client
let response = await action4(partialResponse3);
write to dynamoDB { status: action 4 complete, response: response }.
} catch(err) {
write to dynamoDB { status: failed, error: err }.
}
}
Gateway-API --> lambda3 -> getStatus(uuid): returns status {
return status from dynamoDB (uuid);
}
Your UI Code:
res = ajax.get(/startProcess);
uuid = res.uuid;
in interval every X (e.g. 3) seconds:
status = ajax.get(/getStatus?uuid=uuid);
show(status);
if (status.error) {
handle(status.error) and break;
}
if (status.response) {
handle(status.response) and break;
}
}
Just remember that a Lambda cannot exceed 15 minutes of execution time. Therefore, you need to be 100% certain that whatever the process does, it never exceeds this hard limit.
What you are looking for is to have the response exposed as a stream that you can write to and flush.
Unfortunately, that's not available in Node.js.
How to stream AWS Lambda response in node?
https://docs.aws.amazon.com/lambda/latest/dg/programming-model.html
But you can still do streaming if you use Java:
https://docs.aws.amazon.com/lambda/latest/dg/java-handler-io-type-stream.html
package example;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.IOException;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import com.amazonaws.services.lambda.runtime.Context;
public class Hello implements RequestStreamHandler{
public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
int letter;
while((letter = inputStream.read()) != -1)
{
outputStream.write(Character.toUpperCase(letter));
}
}
}
Aman,
You can push the partial outputs into SQS and read the SQS messages to process them. This is a simple and scalable architecture. AWS provides SQS SDKs in different languages, for example JavaScript, Java, Python, etc.
Reading from and writing to SQS is very easy using the SDK, and it can be implemented server-side or in your UI layer (with proper IAM).
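As a rough sketch of the Lambda side with the AWS SDK for JavaScript (the queue URL, the progress-message shape and the reportProgress helper are assumptions added for illustration; action1/action2 are the functions from the question):
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

// Hypothetical helper: push a progress message to SQS after each action completes.
const reportProgress = (stage, data) =>
  sqs.sendMessage({
    QueueUrl: process.env.QUEUE_URL, // assumed to be configured on the function
    MessageBody: JSON.stringify({ stage, data }),
  }).promise();

exports.handler = async (event) => {
  const partialResponse1 = await action1(event);
  await reportProgress('action1 complete', partialResponse1);

  const partialResponse2 = await action2(partialResponse1);
  await reportProgress('action2 complete', partialResponse2);

  // ... and so on for action3 / action4; the client (or a backend endpoint with
  // proper IAM) reads these messages from the queue to display progress.
};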
I found that AWS Step Functions may be what you need:
AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly.
Check this link for more detail:
In our example, you are a developer who has been asked to create a serverless application to automate handling of support tickets in a call center. While you could have one Lambda function call the other, you worry that managing all of those connections will become challenging as the call center application becomes more sophisticated. Plus, any change in the flow of the application will require changes in multiple places, and you could end up writing the same code over and over again.
I'm using Akka on one of my projects and I need to get the state of an actor; the way I'm doing it is as follows.
A REST request comes in:
@GET
@Produces(Array(MediaType.APPLICATION_JSON))
def get() = {
try {
Await.result((getScanningActor ? WorkInfo), 5.second).asInstanceOf[ScanRequest]
}
catch{
case ex: TimeoutException => {
RequestTimedOut()
}
}
}
On the actor, I respond with the current work state:
case WorkInfo => sender ! currentWork
For some reason, the first time I call this function I get the correct value; on the following requests I get the same value I received on the first call.
I'm also using DCEVM, if that makes any difference.