I know this question has been asked before, and I agree with most answers, which claim it is better to follow the asynchronous pattern that URLSession uses in Swift 3. However, I have the following scenario, where an async request cannot be used.
With Swift 3 and the ability to run Swift on servers, I have the following problem:
1. The server receives a request from a client.
2. To process the request, the server has to send a URL request and wait for the response to arrive.
3. Once the response arrives, process it and reply to the client.
The problem arises in step 2, where URLSession only gives us the ability to initiate an asynchronous data task. Most (if not all) server-side Swift web frameworks do not support async responses: when a request arrives at the server, everything has to be done synchronously, and the response is sent at the end.
The only solution I have found so far is using DispatchSemaphore (see the example at the end), and I am not sure whether that will work in a scaled environment.
Any help or thoughts would be appreciated.
extension URLSession {
    func synchronousDataTaskWithURL(_ url: URL) -> (Data?, URLResponse?, Error?) {
        var data: Data?
        var response: URLResponse?
        var error: Error?

        // Block the calling thread until the completion handler signals the semaphore.
        let sem = DispatchSemaphore(value: 0)
        let task = self.dataTask(with: url, completionHandler: {
            data = $0
            response = $1
            error = $2
            sem.signal()
        })
        task.resume()

        let result = sem.wait(timeout: DispatchTime.distantFuture)
        switch result {
        case .success:
            return (data, response, error)
        case .timedOut:
            // URLSessionError is a custom error type defined elsewhere in my project.
            let error = URLSessionError(kind: URLSessionError.ErrorKind.timeout)
            return (data, response, error)
        }
    }
}
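For completeness, this is roughly how I would call the extension from inside a route handler; the URL here is only a placeholder:

// Placeholder URL; in practice this would come from the incoming client request.
let (data, response, error) = URLSession.shared.synchronousDataTaskWithURL(URL(string: "http://www.example.com")!)
if let data = data, error == nil {
    // Process the downloaded data and build the reply to the client here.
    print("Received \(data.count) bytes, status: \(String(describing: response))")
}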
I only have experience with the Kitura web framework, and this is where I faced the problem. I suppose that similar problems exist in all other Swift web frameworks.
In Vapor, you can use the Droplet's client to make synchronous requests.
let res = try drop.client.get("https://httpbin.org")
print(res)
Additionally, you can use the Portal class to make asynchronous tasks synchronous.
let res = try Portal.open { portal in
    asyncClient.get("https://httpbin.org") { res in
        portal.close(with: res)
    }
}
Your three-step problem can be solved with a completion handler, i.e., a callback in the style of the Node.js convention:
import Foundation
import Kitura
import HeliumLogger
import LoggerAPI

let session = URLSession(configuration: URLSessionConfiguration.default)

Log.logger = HeliumLogger()
let router = Router()

router.get("/test") { req, res, next in
    // Start the outgoing request; the reply to the client is sent from the completion handler.
    let dataTask = session.dataTask(with: URL(string: "http://www.example.com")!) { data, urlResponse, error in
        try! res.send(data: data!).end()
    }
    dataTask.resume()
}

Kitura.addHTTPServer(onPort: 3000, with: router)
Kitura.run()
This is a quick demo of a solution to your problem, and it by no means follows best Swift/Kitura practices. But with the use of a completion handler, my Kitura app can make an HTTP call to fetch the resource at http://www.example.com, wait for the response, and then send the result back to my app's client.
Link to the relevant API: https://developer.apple.com/reference/foundation/urlsession/1410330-datatask
Related
I have a Lambda function which does a series of actions, and a React application which triggers the Lambda function.
Is there a way I can send a partial response from the Lambda function after each action is complete?
const testFunction = async (event, context, callback) => {
  let partialResponse1 = await action1(event);
  // send partial response to client
  let partialResponse2 = await action2(partialResponse1);
  // send partial response to client
  let partialResponse3 = await action3(partialResponse2);
  // send partial response to client
  let response = await action4(partialResponse3);
  // send final response
};
Is this possible in Lambda functions? If so, how can we do this? Any reference docs or sample code would be a great help.
Thanks.
Note: This is a fairly simple case of showing a loader with a percentage on the client side. I don't want to overcomplicate things with SQS or Step Functions.
I am still looking for an answer to this.
From what I understand, you're using API Gateway + Lambda and are looking to show the progress of the Lambda in the UI.
Since each step must finish before the next step begins, I see no reason not to call the Lambda 4 times, or to split the Lambda into 4 separate Lambdas.
E.g.:
// Not real syntax!
try {
  res1 = await ajax.post(/process, { stage: 1, data: ... });
  out(stage 1 complete);
  res2 = await ajax.post(/process, { stage: 2, data: res1 });
  out(stage 2 complete);
  res3 = await ajax.post(/process, { stage: 3, data: res2 });
  out(stage 3 complete);
  res4 = await ajax.post(/process, { stage: 4, data: res3 });
  out(stage 4 complete);
  out(process finished);
} catch (err) {
  out(stage {$err.stage-number} failed to complete);
}
If you still want all 4 calls to be executed within the same Lambda execution, you can do the following. This is especially relevant if the process is expected to be very long, since it is usually not good practice to keep a long-hanging HTTP transaction open.
You can implement it by saving the "progress" in a database, and when the process is complete, saving the results to the database as well.
All the client then needs to do is query the status every X seconds.
// Not real syntax
Gateway-API --> lambda1 - startProcess(): returns ID {
  uuid = randomUUID();
  write to dynamoDB { status: starting };
  send sqs-message-to-start-process(data, uuid);
  return response { uuid: uuid };
}

SQS --> lambda2 - execute(): returns void {
  try {
    let partialResponse1 = await action1(event);
    write to dynamoDB { status: action 1 complete };
    // send partial response to client
    let partialResponse2 = await action2(partialResponse1);
    write to dynamoDB { status: action 2 complete };
    // send partial response to client
    let partialResponse3 = await action3(partialResponse2);
    write to dynamoDB { status: action 3 complete };
    // send partial response to client
    let response = await action4(partialResponse3);
    write to dynamoDB { status: action 4 complete, response: response };
  } catch (err) {
    write to dynamoDB { status: failed, error: err };
  }
}

Gateway-API --> lambda3 - getStatus(uuid): returns status {
  return status from dynamoDB(uuid);
}
Your UI Code:
res = ajax.get(/startProcess);
uuid = res.uuid;

in interval every X (e.g. 3) seconds: {
  status = ajax.get(/getStatus?uuid=uuid);
  show(status);
  if (status.error) {
    handle(status.error) and break;
  }
  if (status.response) {
    handle(status.response) and break;
  }
}
Just remember that Lambdas cannot exceed 15 minutes of execution time. Therefore, you need to be 100% certain that whatever the process does, it never exceeds this hard limit.
What you are looking for is to have the response exposed as a stream that you can write to and flush.
Unfortunately, that is not available in Node.js:
How to stream AWS Lambda response in node?
https://docs.aws.amazon.com/lambda/latest/dg/programming-model.html
But you can still do the streaming if you use Java:
https://docs.aws.amazon.com/lambda/latest/dg/java-handler-io-type-stream.html
package example;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

public class Hello implements RequestStreamHandler {
    // Reads the request body and writes the uppercased bytes straight to the output stream.
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
        int letter;
        while ((letter = inputStream.read()) != -1) {
            outputStream.write(Character.toUpperCase(letter));
        }
    }
}
Aman,
You can push the partial outputs into SQS and read the SQS messages to process them. This is a simple and scalable architecture. AWS provides SQS SDKs in different languages, for example JavaScript, Java, Python, etc.
Reading from and writing to SQS is very easy using the SDK, and it can be implemented either server-side or in your UI layer (with the proper IAM permissions).
I found that AWS Step Functions may be what you need:
AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly.
Check this link for more detail:
In our example, you are a developer who has been asked to create a serverless application to automate handling of support tickets in a call center. While you could have one Lambda function call the other, you worry that managing all of those connections will become challenging as the call center application becomes more sophisticated. Plus, any change in the flow of the application will require changes in multiple places, and you could end up writing the same code over and over again.
Problem:
When I make a request from my iOS Swift-based app, the server responds with two responses.
Inputs:
In my request, I am sending some user values, including one Base64 image string. I have already ensured that my app makes the request only one time.
Outputs:
When we opened the server log, it printed two sets of requests and responses. The difference is that the first one does not contain the Base64 image string and the second one does. That is why I am receiving two different responses.
Questions:
Which end is causing this problem, the front end or the back end?
Note:
I have given the front-end code below, but I can't provide the back-end code.
let task = urlSession.dataTask(with: urlRequest, completionHandler: { (data, response, error) in
    if error != nil {
        print("Error ==", error!.localizedDescription)
        onFailure(error!.localizedDescription)
    } else {
        let httpResponse = response as! HTTPURLResponse
        let statusCode = httpResponse.statusCode

        // For some critical cases:
        //print("Status code: ", statusCode)
        //print("http Response: ", httpResponse)

        // JSON serialize
        do {
            let jsonResponse = try JSONSerialization.jsonObject(with: data!, options: .allowFragments)
            print("Server Response == ", jsonResponse)
            onSuccess(statusCode, jsonResponse)
        } catch {
            onFailure("JSON Parser Error")
        }
    }
})
I have implemented a REST API in Go using go-gin, and I am trying to test a handler function which looks like the following:
func editNameHandler(c *gin.Context) {
    // make a REST call to another server
    callToAnotherServer()
    c.Status(200)
}
I want to mock the callToAnotherServer method so that my test case doesn't call the 3rd-party server at all.
My test case looks like this:
func TestSeriveIdStatusRestorePatch(t *testing.T) {
    // Request body
    send := strings.NewReader(`{"name":"Robert"}`)
    // This function sends an HTTP request to the API, which ultimately calls editNameHandler.
    // Ignore the variables; they are retrieved in code. This is just to simplify the question.
    ValidTokenTestPatch(API_VERSION+"/accounts/"+TestAccountUUID+"/students/"+TestStudentId, t, send, http.StatusOK)
}
I went through Mock functions in Go, which mentions how we can pass a function to mock. I am wondering how we can pass a function while sending an HTTP request. How can I mock the function in such a case? What is the best practice?
I don't think there is a single answer to this question, but I'll share my approach to how I'm currently doing dependency injection in Go with go-gin (it should be nearly the same with any other router).
From a business point of view, I have a struct that wraps all access to my services, which are responsible for business rules/processing.
// WchyContext is an application-wide context
type WchyContext struct {
    Health services.HealthCheckService
    Tenant services.TenantService
    // ... whatever else you need
}
My services are then just interfaces.
// HealthCheckService is a simple general purpose health check service
type HealthCheckService interface {
    IsDatabaseOnline() bool
}
These have multiple implementations, like MockedHealthCheck, PostgresHealthCheck, PostgresTenantService and so on.
My router then depends on a WchyContext; the code looks like this:
func GetMainEngine(ctx context.WchyContext) *gin.Engine {
    router := gin.New()
    router.Use(gin.Logger())

    router.GET("/status", Status(ctx))
    router.GET("/tenants/:domain", TenantByDomain(ctx))

    return router
}
Status and TenantByDomain act like handler factories; all they do is create a new handler based on the given context, like this:
type statusHandler struct {
    ctx context.WchyContext
}

// Status creates a new Status HTTP handler
func Status(ctx context.WchyContext) gin.HandlerFunc {
    return statusHandler{ctx: ctx}.get()
}

func (h statusHandler) get() gin.HandlerFunc {
    return func(c *gin.Context) {
        c.JSON(200, gin.H{
            "healthy": gin.H{
                "database": h.ctx.Health.IsDatabaseOnline(),
            },
            "now": time.Now().Format("2006.01.02.150405"),
        })
    }
}
As you can see, my health check handler doesn't care about the concrete implementation of my services; I just use whatever is in the ctx.
The last part depends on the current execution environment. During automated tests I create a new WchyContext using mocked/stubbed services and pass it to GetMainEngine, like this:
ctx := context.WchyContext{
    Health: &services.InMemoryHealthCheckService{Status: false},
    Tenant: &services.InMemoryTenantService{Tenants: []*models.Tenant{
        &models.Tenant{ID: 1, Name: "Orange Inc.", Domain: "orange"},
        &models.Tenant{ID: 2, Name: "The Triathlon Shop", Domain: "trishop"},
    }},
}

router := handlers.GetMainEngine(ctx)
request, _ := http.NewRequest(method, url, nil)
response := httptest.NewRecorder()
router.ServeHTTP(response, request)
// ... check if the response matches what you expect from your handler
And when you set it up to actually listen on an HTTP port, the wiring looks like this:
var ctx context.WchyContext
var db *sql.DB

func init() {
    db, _ = sql.Open("postgres", os.Getenv("DATABASE_URL"))
    ctx = context.WchyContext{
        Health: &services.PostgresHealthCheckService{DB: db},
        Tenant: &services.PostgresTenantService{DB: db},
    }
}

func main() {
    handlers.GetMainEngine(ctx).Run(":" + util.GetEnvOrDefault("PORT", "3000"))
}
There are a few things I don't like about this, and I'll probably refactor/improve it later, but it has been working well so far.
If you want to see a full code reference, I'm working on this project here: https://github.com/WeCanHearYou/wchy
Hope it can help you somehow.
I want to send a message to Openfire using XMPP. Everything works perfectly, and I can even receive messages, but I am not able to send them and I don't know why. I tried this code:
@IBAction func SendMessageClicked(_ sender: AnyObject) {
    let message = messageTextField.text
    var clientJid: XMPPJID!
    clientJid = XMPPJID(string: "Bure#ip-772-99-99-99.ec3.internal")
    let senderJID = clientJid
    let msg = XMPPMessage(type: "chat", to: senderJID)

    msg?.addBody(message)
    stream?.send(msg)
}
It does not throw any error, but the message is not sent.
Please help.
// Build the message with a body and a unique origin id, then send it over the stream.
let xMessage = XMPPMessage(type: "chat", to: XMPPJID(string: clientJid))
xMessage.addBody(message)
xMessage.addOriginId(stream.generateUUID)
stream.send(xMessage)
I had the same problem and I just found the issue. Make sure the connection is established and authentication is done completely before trying to send messages. To do that you can use these XMPPStreamDelegate functions:
func xmppStreamDidConnect(_ stream: XMPPStream!) {
    // Connection is now established
}

func xmppStreamDidAuthenticate(_ sender: XMPPStream!) {
    // Authentication is done. Now you can send messages.
}
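For context, here is a rough sketch of how the whole flow can fit together; the JID, host, and password are placeholders, and I am assuming the standard XMPPFramework delegate wiring rather than your exact setup:

import XMPPFramework

// Rough sketch, not production code: connect first, authenticate in the delegate
// callback, and only send messages after xmppStreamDidAuthenticate has fired.
class ChatClient: NSObject, XMPPStreamDelegate {
    let stream = XMPPStream()

    func start() {
        stream.myJID = XMPPJID(string: "user@example.com")   // placeholder JID
        stream.hostName = "example.com"                      // placeholder host
        stream.addDelegate(self, delegateQueue: .main)
        try? stream.connect(withTimeout: XMPPStreamTimeoutNone)
    }

    func xmppStreamDidConnect(_ sender: XMPPStream) {
        // Connected: now authenticate with the account password (placeholder).
        try? sender.authenticate(withPassword: "password")
    }

    func xmppStreamDidAuthenticate(_ sender: XMPPStream) {
        // Authenticated: it is now safe to build and send a message.
        let msg = XMPPMessage(type: "chat", to: XMPPJID(string: "friend@example.com"))
        msg.addBody("Hello")
        sender.send(msg)
    }
}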
I'm using Devise on Ruby on Rails for authentication. Taking it one step at a time, I have disabled cookie authentication in order to test retrieving results prior to authentication.
If I go to my browser and navigate to the URL that Alamofire is visiting, I get results in JSON format like this:
{"id":250,"name":null,"username":"walker","bio":null,"gender":null,"birth_date":null,"profile_image_url":null}
I'm making the Alamofire request like this:
Alamofire.request(requestPath, method: .get, parameters: [:], encoding: JSONEncoding.default, headers: [:]).responseJSON { (response) in
    if (response.result.isFailure) {
        completion(false, "")
    } else {
        if let result = response.result.value {
            completion(true, result)
        }
    }
}
This is all inside another method which simply provides a completion handler, as you can see inside the completion handler of the Alamofire request.
I get an error every single time.
The error says:
responseSerializationFailed : ResponseSerializationFailureReason
What am I doing wrong?
This error indicates that your response is not JSON-formatted data (or that something is wrong with your API response). Try using something like Postman to check your API response and make sure everything is OK before making the request from Swift.
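If you want to see what the server is actually returning from inside the app, a quick way is to ask Alamofire for the raw string instead of JSON. This is just a debugging sketch reusing your requestPath, not a fix:

// Debugging sketch (Alamofire 4): fetch the raw body so you can see whether
// the server returned valid JSON or, for example, an HTML error page.
Alamofire.request(requestPath, method: .get).responseString { response in
    print("Status code:", response.response?.statusCode ?? -1)
    print("Raw body:", response.result.value ?? "<no body>")
}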