Looking for some help regarding outbound calling.
Using the article https://aws.amazon.com/blogs/contact-center/identify-and-move-unwelcomed-calls-on-your-amazon-connect-instance/
I have created a function that blocks the call and plays a message to customers if the number exists in a SQL table, for inbound calls.
My main goal, however, is outbound calling: blocking numbers that are in the UK CTPS or US Do Not Call database.
I want to check the dialed number against the DB before the call is connected, then proceed if the number does not exist, or terminate the call if it does.
There appears to be little documentation regarding outbound calling flows.
I can set up the flow to check the number after the call has connected, but I obviously need it to work before the call is placed.
Thanks
When an outbound contact is initiated in Amazon Connect, the dial request is processed immediately and the call is connected to a contact flow only after it has been set up. This means there is no opportunity to cancel the dialing request after the dialing client has sent it. If you need logic to deny the dialing request, it has to run in the client before the request is sent to the Amazon Connect APIs.
There are two APIs that allow contacts to be created/initiated: the one used by web-based interfaces such as the Amazon Connect Contact Control Panel, which agents use, and the Outbound API that is part of the AWS SDK, which is meant for automated dialing applications. If your use case is preventing agents from dialing numbers on Do Not Call lists, you can use the Streams API to create a custom dialing interface for the agents and only allow the dialing request to be sent after you have checked your Do Not Call list.
You could use Amazon API Gateway to expose an HTTP interface to your Lambda code using the Lambda proxy integration (see documentation here). When an agent clicks the dial button in your custom interface, you call the API Gateway method to check the number against your DNC list. If the number is not found in the DNC list, you then process the dialing request with the agent.connect() function of the Streams API (example below).
// Send the dial request through the Amazon Connect Streams API.
agent.connect(Endpoint.byPhoneNumber("5558675309"), {
    success: function() { /* dial request accepted by Amazon Connect */ },
    failure: function() { /* dial request rejected; handle or log the error */ }
});
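Putting those pieces together, here is a minimal sketch of the check-then-dial flow, assuming a hypothetical API Gateway endpoint at /dnc-check that returns a JSON body like { "blocked": true }; the URL, path, and response shape are placeholders for this example, not part of the Streams API:

// Hypothetical custom dial handler: check the number against the DNC list
// via API Gateway before sending the dial request to Amazon Connect.
async function dialIfAllowed(agent, phoneNumber) {
    // Placeholder API Gateway URL backed by the Lambda that queries the DNC table.
    const response = await fetch(
        "https://abc123.execute-api.us-east-1.amazonaws.com/prod/dnc-check?number=" +
        encodeURIComponent(phoneNumber)
    );
    const result = await response.json();

    if (result.blocked) {
        // Number is on the Do Not Call list; never send the dial request.
        console.log("Number is on the DNC list; call not placed.");
        return;
    }

    // Number is not on the list: send the dial request through the Streams API.
    agent.connect(Endpoint.byPhoneNumber(phoneNumber), {
        success: function() { console.log("Outbound call placed."); },
        failure: function(err) { console.error("Dial request failed:", err); }
    });
}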
Related
This is a system design question about creating a messaging application (like WhatsApp or FB Messenger).
I saw a video in which the users were connected via WebSocket to API Gateway, then API Gateway pushed the message onto SQS, which was then polled by the EC2 compute layer to process the message (store it in the DB) and then send the message on to the recipient.
How can the backend EC2/compute layer send the message to the recipient (Bob)? Can it just call a route on API Gateway, and would API Gateway know the connection details of the recipient and where to send the message? Would there need to be an additional caching layer to store the connection details of every user? I'm new to AWS, so I'm not sure how this is accomplished and whether you can call API Gateway to send back to a user.
Also, if you know how group chat would work, please share.
As mentioned in the documentation, your backend EC2 servers can send messages to the connected clients directly via the @connections API. This documentation page walks you through how to do that.
For this, you'll need to supply the connectionId of the recipient as part of the @connections request. See this answer on how to do so.
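For example, from a Node.js backend the AWS SDK can sign and send the @connections request for you. Below is a minimal sketch; the API ID, stage, and connection ID are placeholders:

// Minimal sketch: push a message to a connected WebSocket client from the
// backend through the @connections callback endpoint, using AWS SDK for JavaScript v3.
const {
    ApiGatewayManagementApiClient,
    PostToConnectionCommand,
} = require("@aws-sdk/client-apigatewaymanagementapi");

// The endpoint is your WebSocket API's callback URL:
// https://{api-id}.execute-api.{region}.amazonaws.com/{stage}
const client = new ApiGatewayManagementApiClient({
    endpoint: "https://abc123.execute-api.us-east-1.amazonaws.com/production",
});

async function sendToClient(connectionId, payload) {
    await client.send(new PostToConnectionCommand({
        ConnectionId: connectionId, // the recipient's connectionId, looked up from your store
        Data: Buffer.from(JSON.stringify(payload)),
    }));
}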
I am building a chat application similar to Telegram. Let's say a group has 1 million members; if someone sends a message in that group, everyone should get the message instantly.
In my project I am using DynamoDB to store user and message details, AWS Lambda to query the data from DynamoDB, and API Gateway for the WebSocket connections. Is there any possibility that the Lambda function will be terminated before it finishes the query and sends the message to all of the connected users? If so, what would be an alternative approach?
Flow Explanation:
Once the Lambda function gets a group message from the client/frontend, it stores the data in DynamoDB.
After that, I read all of the WebSocket-connected users from another table (e.g. connections).
Finally, I send the message to all of the users one by one through the WebSocket (a sketch of this fan-out is shown below).
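For reference, a minimal sketch of that fan-out step inside the Lambda handler; the table name (connections), its connectionId attribute, and the callback endpoint URL are assumptions taken from the description above:

// Minimal sketch of the fan-out step: read every connectionId from the
// "connections" table and post the message to each client in turn.
const { DynamoDBClient, ScanCommand } = require("@aws-sdk/client-dynamodb");
const {
    ApiGatewayManagementApiClient,
    PostToConnectionCommand,
} = require("@aws-sdk/client-apigatewaymanagementapi");

const ddb = new DynamoDBClient({});
const gateway = new ApiGatewayManagementApiClient({
    endpoint: "https://abc123.execute-api.us-east-1.amazonaws.com/production",
});

async function fanOut(message) {
    const { Items = [] } = await ddb.send(new ScanCommand({ TableName: "connections" }));
    for (const item of Items) {
        try {
            await gateway.send(new PostToConnectionCommand({
                ConnectionId: item.connectionId.S,
                Data: Buffer.from(JSON.stringify(message)),
            }));
        } catch (err) {
            // A GoneException (410) means the client has already disconnected; skip it.
            console.warn("Could not deliver to", item.connectionId.S, err.name);
        }
    }
}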
I've deployed my project in us-east-1 region.
If my approach is not a scalable or recommended way of building a group chat application, please suggest other approaches.
Thanks,
Is it possible, with the AWS WebSocket API, to register custom information to be sent when the $disconnect event is triggered by the client?
The connection can be closed by the server or by the client. As the connection is already closed when it is executed, $disconnect is a best-effort event. API Gateway will try its best to deliver the $disconnect event to your integration, but it cannot guarantee delivery.
I understand that $disconnect is a best-effort event and may not reach the integration, but in most cases it will, so I would like to be able to send some information to the Lambda about the user or device that disconnects, rather than making another database call to look up the userId/deviceId for a given connectionId.
I have checked the request context of the $disconnect event (which contains metadata such as the sourceIp), but none of the information there seems reliable enough to do this mapping.
I have a service listening on 'https://myapp.a.run.app/dosomething', and I want to leverage the scalability features of Cloud Run. So in the controller for 'dosomething', I send off 10 requests to 'https://myapp.a.run.app/smalltask'; with my app configured to service only one request per instance, I expect 10 instances to spin up, each do their small task, and return (all within the timeout period).
But I don't know how to properly authenticate these requests, so all 10 of them result in 403s. For Cloud Run services, I manually pass in a bearer token with the initial request, though I expect to add some API proxy at some point. But without said API proxy, what's the right way to send the requests so that they are accepted? The app is running as a user that does have permission to access the endpoint.
Authenticating service-to-service
If your architecture is using multiple services, these services will likely need to communicate with each other.
You can use synchronous or asynchronous service-to-service communication:
For asynchronous communication, use
Cloud Tasks for one to one asynchronous communication
Pub/Sub for one to many asynchronous communication
Cloud Scheduler for regularly scheduled asynchronous communication.
Cloud Workflows for orchestration services.
For synchronous communication:
One service invokes another one over HTTP using its endpoint URL. In this use case, it's a good idea to ensure that each service is only able to make requests to specific services. For instance, if you have a login service, it should be able to access the user-profiles service, but it probably shouldn't be able to access the search service.
First, you'll need to configure the receiving service to accept requests from the calling service:
Grant the Cloud Run Invoker (roles/run.invoker) role to the calling service identity on the receiving service. By default, this identity is PROJECT_NUMBER-compute@developer.gserviceaccount.com.
In the calling service, you'll need to:
Create a Google-signed ID token with the audience (aud) set to the URL of the receiving service. This value must contain the scheme prefix (http:// or https://), and custom domains are currently not supported for the aud value.
Include the ID token in an Authorization: Bearer ID_TOKEN header. You can get this token from the metadata server, while the container is running on Cloud Run (fully managed). If the application is running outside Google Cloud, you can generate an ID token from a service account key file.
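For illustration, here is a minimal Node.js sketch of those two steps, run from inside the calling Cloud Run service; the receiving service URL is a placeholder:

// Minimal sketch (Node 18+ on Cloud Run): fetch a Google-signed ID token from
// the metadata server and call the receiving service with it.
const receivingServiceUrl = "https://receiving-service-abc123-uc.a.run.app"; // placeholder

async function callReceivingService(path) {
    // Ask the metadata server for an ID token whose audience is the receiving service URL.
    const tokenResponse = await fetch(
        "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity" +
        "?audience=" + encodeURIComponent(receivingServiceUrl),
        { headers: { "Metadata-Flavor": "Google" } }
    );
    const idToken = await tokenResponse.text();

    // Include the ID token in the Authorization header of the actual request.
    return fetch(receivingServiceUrl + path, {
        headers: { Authorization: "Bearer " + idToken },
    });
}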
For a full guide and examples in Node/Python/Go/Java and others see: Authenticating service-to-service
I have built an EC2 reverse proxy (Nginx) that communicates with an external API endpoint over the internet. I have a Route 53 DNS record (an A record) pointing to my EC2 instance. There are a few endpoints (Nginx locations), and depending on which URL you hit, you are routed to a specific proxy location and forwarded to the right endpoint on the external API. It all works great.
Now I want to create some type of job that will test this process periodically to ensure that it's running and notify me if it's not. AWS has so many tools, and I think I need to use Lambda and API Gateway.
I'd like to hit my URL (the Route 53 DNS name), go through the EC2 instance, and receive a response from the endpoint server. My site does this, and Postman can, but I can't figure out how to accomplish it in an automated way and be alerted based on the response values.
How can I test my full pathway (www.example.com/option -> nginxEC2 path('/option') -> www.endpoint.com/option) and be notified based on the results?
EDIT: I need to be able to send a body with this request. If I send it without a body, the server returns a 404; if I can send it with a body/payload, I'll get a response.
EDIT: Basically I'm looking for a way to hit my DNS name, which routes through the A record to my reverse proxy and on to an endpoint. I just need to make an HTTP request to the domain, get an answer back, and know the status code.
Mark B's solution is the closest, as the free site he sent me has an option to pay for this service. I'm going to leave it open a few more days.
You definitely don't need API Gateway for this. That wouldn't help you test this at all. API Gateway would just give you an entirely new API that you would need to test.
You could use Lambda for this as you mentioned. You would write a Lambda function that hits the URLs you want to test, checks the results, and sends you a message over SES or SNS or some other means when it fails. The Lambda function could be configured to automatically run on a schedule.
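A minimal sketch of such a Lambda function (Node.js 18+), assuming an SNS topic for the alert; the URL, request body, and topic ARN are placeholders, and the POST body is included because the endpoint returns a 404 without one:

// Scheduled Lambda uptime check: POST a small body through the proxy and
// publish an SNS alert if the response status is not 200.
const { SNSClient, PublishCommand } = require("@aws-sdk/client-sns");

const sns = new SNSClient({});
const TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:uptime-alerts"; // placeholder

exports.handler = async () => {
    let status;
    try {
        const response = await fetch("https://www.example.com/option", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ ping: true }), // the endpoint needs a body
        });
        status = response.status;
    } catch (err) {
        status = "request failed: " + err.message;
    }

    if (status !== 200) {
        await sns.send(new PublishCommand({
            TopicArn: TOPIC_ARN,
            Subject: "Reverse proxy health check failed",
            Message: "Check of https://www.example.com/option returned: " + status,
        }));
    }
};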
However, AWS already has a service that does exactly what you are looking for: Route53 Health Checks.
What you are describing is called an HTTP health check or HTTP uptime monitor. There are tons of services that provide this feature, some of them free.
It looks like the word you're looking for is trace: you want to trace requests through your application. AWS's offering for that is X-Ray. As you can see in the official documentation, you need to use their SDK to instrument your application, which talks to a daemon on your EC2 instance. You can then integrate with CloudWatch and SNS to be notified of errors (e.g. 4xx codes): https://aws.amazon.com/blogs/devops/using-amazon-cloudwatch-and-amazon-sns-to-notify-when-aws-x-ray-detects-elevated-levels-of-latency-errors-and-faults-in-your-application/
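As an illustration, here is a minimal sketch of instrumenting a Node.js/Express app with the X-Ray SDK; the service name, route, and port are placeholders:

// The X-Ray SDK records segments for incoming requests and subsegments for
// outbound HTTPS calls, and sends them to the X-Ray daemon on the instance.
const AWSXRay = require("aws-xray-sdk");
const express = require("express");

// Capture outbound HTTPS calls (e.g. to the external API) as subsegments.
AWSXRay.captureHTTPsGlobal(require("https"));

const app = express();
app.use(AWSXRay.express.openSegment("my-proxy-app")); // placeholder service name

app.get("/option", (req, res) => {
    // ... forward the request to the external API here ...
    res.sendStatus(200);
});

app.use(AWSXRay.express.closeSegment());
app.listen(3000);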
Hope it helps!