Cannot make gRPC Postman request to NestJS server

I've set up a NestJS server that handles gRPC requests. In order to do some ad-hoc debugging, I'm trying to use Postman. However, whenever I try to send a request, Postman returns the following error: Received RST_STREAM with code 2 triggered by internal client error: Protocol error.
This is my app.controller.ts file:
import { Metadata, ServerUnaryCall } from '@grpc/grpc-js';
import { Controller } from '@nestjs/common';
import { GrpcMethod } from '@nestjs/microservices';
import { BaseLoggingService } from './common/baseLogging.service';
import { CreateLogRequest, CreateLogResponse } from './generated/logs';

@Controller()
export class AppController {
  constructor(private baseLoggingService: BaseLoggingService) {}

  @GrpcMethod('LogService', 'CreateLog')
  async createLog(
    req: CreateLogRequest,
    metadata: Metadata,
    call: ServerUnaryCall<CreateLogRequest, CreateLogResponse>,
  ): Promise<CreateLogResponse> {
    return await this.baseLoggingService.createLog(req);
  }
}
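For completeness, the gRPC transport is wired up in main.ts along these lines (a minimal sketch; the proto path and the port are assumptions based on the rest of the post):

import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { join } from 'path';
import { AppModule } from './app.module';

async function bootstrap() {
  // Minimal NestJS gRPC bootstrap; 'logs' matches the .proto package below,
  // and port 6666 matches the grpcurl call later in the post (both assumed).
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
    transport: Transport.GRPC,
    options: {
      package: 'logs',
      protoPath: join(__dirname, 'logs.proto'),
      url: '0.0.0.0:6666',
    },
  });
  await app.listen();
}
bootstrap();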
The interfaces CreateLogRequest and CreateLogResponse are generated by @protobuf-ts/plugin, and are based on the following .proto file:
/**
 * Definitions of shared interfaces
 **/
syntax = "proto3";

package logs;

enum LogLevel {
  INFO = 0;
  WARN = 1;
  ERROR = 2;
  DEBUG = 3;
}

message LogContext {
  string sessionId = 1;
  string requestId = 2;
  string hostname = 3;
  string podName = 4;
  string grpcMethodName = 5;
  uint32 durationMs = 6;
}

message ErrorData {
  string name = 1;
  string notificationCode = 2;
  string stack = 3;
}

message CreateLogRequest {
  LogLevel level = 1;
  string service = 2;
  int32 timestamp = 3;
  string message = 4;
  LogContext context = 5;
  ErrorData errorData = 6;
}

message CreateLogResponse {
  LogLevel level = 1;
  string service = 2;
  int32 code = 3;
}

service LogService {
  rpc CreateLog (CreateLogRequest) returns (CreateLogResponse) {}
}
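For reference, @protobuf-ts/plugin turns these messages into plain interfaces, roughly like this (a sketch of the generated shape, not verified output):

// Sketch of the generated shape; message-typed fields come out optional.
export interface CreateLogRequest {
  level: LogLevel;      // enum generated from LogLevel
  service: string;
  timestamp: number;
  message: string;
  context?: LogContext;
  errorData?: ErrorData;
}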
The server instance is running on a remote machine, so I am using SSH tunneling to send the requests. Here is the Postman request and response. There is no authentication in place.
The server logs show that the controller's createLog function never gets triggered when making the request. I even tried grpcurl, which also fails, albeit for a different, equally inexplicable reason:
grpcurl -plaintext -d '{
  "context": {
    "durationMs": 4100079420,
    "grpcMethodName": "magna ut commodo",
    "hostname": "exercitation dolor",
    "podName": "ad Duis non",
    "requestId": "do",
    "sessionId": "nostrud"
  },
  "errorData": {
    "name": "nostrud enim Lorem consectetur",
    "notificationCode": "in anim",
    "stack": "incididunt"
  },
  "level": 0,
  "message": "eu qui dolore laborum eiusmod",
  "service": "sunt",
  "timestamp": -499959849
}' localhost:6666 LogService/CreateLog
Failed to dial target host "localhost:6666": context deadline exceeded

Related

NOT_FOUND(5): Instance Unavailable. HTTP status code 404

I get this error when the task is processed.
This is my Node.js code:
// Assumed setup (not shown in the original snippet): the standard Cloud Tasks client.
import {CloudTasksClient, protos} from '@google-cloud/tasks';
const client = new CloudTasksClient();

async function quickstart(message: any) {
  // TODO(developer): Uncomment these lines and replace with your values.
  const project = '';  // project id
  const queue = '';    // queue name
  const location = ''; // region
  const payload = JSON.stringify({
    id: message.id,
    data: message.data,
    attributes: message.attributes,
  });
  const inSeconds = 180;

  // Construct the fully qualified queue name.
  const parent = client.queuePath(project, location, queue);

  const task = {
    appEngineHttpRequest: {
      headers: {'Content-Type': 'application/json'},
      httpMethod: protos.google.cloud.tasks.v2.HttpMethod.POST,
      relativeUri: '/api/download',
      body: '',
    },
    scheduleTime: {},
  };

  if (payload) {
    task.appEngineHttpRequest.body = Buffer.from(payload).toString('base64');
  }

  if (inSeconds) {
    // Schedule the task `inSeconds` from now (scheduleTime is epoch seconds).
    task.scheduleTime = {
      seconds: inSeconds + Date.now() / 1000,
    };
  }

  const request = {
    parent: parent,
    task: task,
  };

  console.log('Sending task:');
  console.log(task);

  // Send create task request.
  const [response] = await client.createTask(request);
  console.log(`Created task ${response.name}`);
  return true;
}
The task is created without issue. However, it didn't trigger my Cloud Function, and I get a 404 or an unhandled exception in my Cloud logs. I have no idea what's going wrong.
I also tested with the gcloud CLI without the issue; the gcloud CLI is able to trigger my Cloud Function at the provided URL.
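One detail worth noting (an observation about the Cloud Tasks API, not something confirmed in the post): appEngineHttpRequest with a relativeUri only targets an App Engine service in the same project. A Cloud Function exposed at its own HTTPS URL is targeted with an httpRequest task instead, roughly like this (the URL is a placeholder; payload reuses the variable from the snippet above):

const httpTask = {
  httpRequest: {
    httpMethod: protos.google.cloud.tasks.v2.HttpMethod.POST,
    // Placeholder URL; a real Cloud Function URL looks like
    // https://REGION-PROJECT.cloudfunctions.net/FUNCTION_NAME
    url: 'https://REGION-PROJECT.cloudfunctions.net/api-download',
    headers: {'Content-Type': 'application/json'},
    body: Buffer.from(payload).toString('base64'),
  },
};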

Akka Cluster sharding not able to register to coordinator

I am trying to create an Akka sharding cluster. I want to use proxy-only mode on one of the nodes, just to route messages to the shard regions. I am getting the following warning:
[WARN] [02/11/2019 17:04:17.819] [ClusterSystem-akka.actor.default-dispatcher-21] [akka.tcp://ClusterSystem@127.0.0.1:2555/system/sharding/ShardnameProxy] Trying to register to coordinator at [Some(ActorSelection[Anchor(akka.tcp://ClusterSystem@127.0.0.1:2551/), Path(/system/sharding/ShardnameCoordinator/singleton/coordinator)])], but no acknowledgement. Total [1] buffered messages.
**Main.scala:** starts the cluster using the configuration from application.conf (shown later).
object Main {
  val shardName = "Shardname"
  val role = "Master"
  var shardingProbeLocalRegin: Option[ActorRef] = None

  def main(args: Array[String]): Unit = {
    val conf = ConfigFactory.load()
    val system = ActorSystem("ClusterSystem", conf.getConfig("main"))
    ClusterSharding(system).start(shardName, Test.props, ClusterShardingSettings(system),
      ShardDetails.extractEntityId, ShardDetails.extractShardId)
  }
}
**Test.scala:** the entity for the sharding cluster.
object Test {
  def props: Props = Props(classOf[Test])

  class Test extends Actor {
    val log = Logger.getLogger(getClass.getName)

    override def receive = {
      case msg: String =>
        log.info("Message from " + sender().path.toString + " Message is " + msg)
        sender() ! "Done"
    }
  }
}
**MessageProducer.scala** (proxy-only mode): the message producer sends a message to the shard every second.
object MessageProducer {
  var shardingProbeLocalRegin: Option[ActorRef] = None
  object DoSharding
  def prop: Props = Props(classOf[MessageProducer])
  var numeric: Long = 0

  def main(args: Array[String]): Unit = {
    val conf = ConfigFactory.load
    val system = ActorSystem("ClusterSystem", conf.getConfig("messgaeProducer"))
    ClusterSharding(system).startProxy(Main.shardName, None, extractEntityId, extractShardId)
    shardingProbeLocalRegin = Some(ClusterSharding(system).shardRegion(Main.shardName))
    val actor = system.actorOf(Props[MessageProducer], "message")
  }
}

class RemoteAddressExtensionImpl(system: ExtendedActorSystem) extends Extension {
  def address = system.provider.getDefaultAddress
}

object RemoteAddressExtension extends ExtensionKey[RemoteAddressExtensionImpl]

class MessageProducer extends Actor {
  val log = Logger.getLogger(getClass.getName)

  override def preStart(): Unit = {
    println("Starting " + self.path.address)
    context.system.scheduler.schedule(10.seconds, 1.second, self, DoSharding)
  }

  override def receive = {
    case DoSharding =>
      log.info("sending message" + MessageProducer.numeric)
      MessageProducer.shardingProbeLocalRegin.foreach(_ ! "" + MessageProducer.numeric)
      MessageProducer.numeric += 1
  }
}
**application.conf:** configuration file.
main {
  akka {
    actor {
      provider = "akka.cluster.ClusterActorRefProvider"
    }
    remote {
      log-remote-lifecycle-events = on
      netty.tcp {
        hostname = "127.0.0.1"
        port = 2551
      }
    }
    cluster {
      seed-nodes = [
        "akka.tcp://ClusterSystem@127.0.0.1:2551"
      ]
      sharding.state-store-mode = ddata
      auto-down-unreachable-after = 1s
    }
    akka.extensions = ["akka.cluster.metrics.ClusterMetricsExtension", "akka.cluster.ddata.DistributedData"]
  }
}

messgaeProducer {
  akka {
    actor {
      provider = "akka.cluster.ClusterActorRefProvider"
    }
    remote {
      log-remote-lifecycle-events = on
      netty.tcp {
        hostname = "192.168.2.96"
        port = 2554
      }
    }
    cluster {
      seed-nodes = [
        "akka.tcp://ClusterSystem@127.0.0.1:2551"
        //, "akka.tcp://ClusterSystem@127.0.0.1:2552"
      ]
      sharding.state-store-mode = ddata
      auto-down-unreachable-after = 1s
    }
    akka.extensions = ["akka.cluster.metrics.ClusterMetricsExtension", "akka.cluster.ddata.DistributedData"]
  }
}
Am I doing anything wrong? Is there another way to implement this approach? My main aim is to avoid a single point of failure for my cluster: if any node goes down, it should not affect any other node's state. Can anyone help me with this?
Is it solved?
If not, please check your akka.cluster configuration. With startProxy, the proxy node hosts no shard regions itself, so sharding needs a role that tells it which nodes run the coordinator and the regions. You have to set the config like this; it works for me.
For the proxy:

akka.cluster {
  roles = ["Proxy"]
  sharding {
    role = "Master"
  }
}

For the master:

akka.cluster {
  roles = ["Master"]
  sharding {
    role = "Master"
  }
}

Identity Server 3 token request from Postman HTTP tool

Using Postman, I'm struggling to retrieve my Identity Server 3 token.
Error code is : 400 Bad Request
Here are the details:
POST /identity/connect/token HTTP/1.1
Host: localhost:44358
Content-Type: application;x-www-form-urlencoded
Cache-Control: no-cache
Postman-Token: 57fc7aef-0006-81b2-8bf8-8d46b77d21d1
username=MYUSER-ID&password=MY-PASSWORD&grant_type=password&client_id=rzrwebguiangulajsclient&client_secret=myclientsecret&redirect_uri=https://localhost:44331/callback
I've done something similar with a simple Visual Studio 2015 WebApi project, where the endpoint was /token.
Any guidance/advice is appreciated...
regards,
Bob
The minimum required for a Resource Owner OAuth request is the following (line breaks added for readability):
POST /connect/token
Header
Content-Type: application/x-www-form-urlencoded
Body
username=MYUSER-ID
&password=MY-PASSWORD
&grant_type=password
&client_id=rzrwebguiangulajsclient
&client_secret=myclientsecret
&scope=api
Off the bat, you are not requesting a scope in your request. Otherwise, there is most probably something wrong in the configuration of your client within Identity Server.
Your best bet would be to enable logging and look at what comes back when this request errors.
Update: also, please don't use the ROPC grant type.
I'm happy to say that we got Postman to work.
It turns out I was very close to getting Postman to work with Identity Server 3 authorization.
The final piece of the solution was setting the Postman client's flow to Flow = Flows.ClientCredentials (see the postmantestclient client definition below):
using System.Collections.Generic;
using IdentityServer3.Core.Models;

namespace MyWebApi.MyIdentityServer.Config
{
    public static class Clients
    {
        public static IEnumerable<Client> Get()
        {
            return new[]
            {
                new Client
                {
                    ClientId = MyConstants.MyIdentityServer.MyWebGuiClientId,
                    ClientName = "My Web Gui Client",
                    Flow = Flows.Implicit,
                    AllowAccessToAllScopes = true,
                    IdentityTokenLifetime = 300,
                    AccessTokenLifetime = 300, // 5 minutes
                    RequireConsent = false,
                    // redirect = URI of the Angular application
                    RedirectUris = new List<string>
                    {
                        MyConstants.MyIdentityServer.MyWebGuiUri + "callback.html",
                        // for silent refresh
                        MyConstants.MyIdentityServer.MyWebGuiUri + "silentrefreshframe.html"
                    },
                    PostLogoutRedirectUris = new List<string>()
                    {
                        MyConstants.MyIdentityServer.MyWebGuiUri + "index.html"
                    }
                },
                new Client
                {
                    ClientId = MyConstants.MyIdentityServer.SwaggerClientId,
                    ClientName = "Swagger Client",
                    Flow = Flows.Implicit,
                    AllowAccessToAllScopes = true,
                    IdentityTokenLifetime = 300,
                    AccessTokenLifetime = 300,
                    RequireConsent = false,
                    // redirect = URI of the Angular application
                    RedirectUris = new List<string>
                    {
                        "https://localhost:44358/swagger/ui/o2c-html"
                    }
                },
                new Client
                {
                    ClientId = "postmantestclient",
                    ClientName = "Postman http test client",
                    Flow = Flows.ClientCredentials,
                    AllowAccessToAllScopes = true,
                    IdentityTokenLifetime = 300,
                    AccessTokenLifetime = 300, // 5 minutes
                    RequireConsent = false,
                    ClientSecrets = new List<Secret>
                    {
                        new Secret("PostmanSecret".Sha256())
                    },
                    RedirectUris = new List<string>()
                    {
                        "https://www.getpostman.com/oauth2/callback"
                    }
                }
            };
        }
    }
}
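For reference, the raw token request matching this postmantestclient definition would look roughly like the following (line breaks added for readability; the scope value is an assumption, since client-credentials requests carry no username or password):

POST /identity/connect/token
Header
Content-Type: application/x-www-form-urlencoded
Body
grant_type=client_credentials
&client_id=postmantestclient
&client_secret=PostmanSecret
&scope=api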

Google Glass notification mismatch between development and production server

I am trying to send a notification through a menu in my app using the Mirror API. For the development environment I am using a proxy server, but in production I am using just SSL, since it is a public domain. My callback URLs for these two setups are below:
// development
callbackUrl = "https://3a4660af.ngrok.com/notify";

// production
if (callbackUrl.equals("https://www.mydomain.com:8080/notify")) {
    callbackUrl = "https://www.mydomain.com:8443/notify";
} else {
    callbackUrl = "https://www.mydomain.com:8443/notify";
}
LOG.info("\ncallbackUrl : " + callbackUrl);

Subscription subscription = new Subscription();
subscription.setCollection(collection);
subscription.setVerifyToken(userId);
subscription.setCallbackUrl(callbackUrl);
subscription.setUserToken(userId);
getMirror(credential).subscriptions().insert(subscription).execute();
But when I try to read the notification in my notification class, I get a mismatch, so the notification action is not working. The notification logs are below:
//development
got raw notification : { "collection": "timeline",
"itemId": "6fa2445e-b14f-46b2-9cff-f0d44d63ecab",
"operation": "UPDATE", "verifyToken": "103560737611562800385",
"userToken": "103560737611562800385",
"userActions": [ { "type": "CUSTOM", "payload": "dealMenu" } ]}
//production
got raw notification : "collection": "timeline",
"operation": "UPDATE",
"userToken": "103560737611562800385", { "payload": "dealMenu" ]null
In the notification class:
BufferedReader notificationReader = new BufferedReader(
        new InputStreamReader(request.getInputStream()));
String notificationString = "";

// Count the lines as a very basic way to prevent Denial of Service attacks
int lines = 0;
while (notificationReader.ready()) {
    notificationString += notificationReader.readLine();
    lines++;
    LOG.info("\ngot raw notification during read : " + notificationString);
    // No notification would ever be this long. Something is very wrong.
    if (lines > 1000) {
        throw new IOException(
                "Attempted to parse notification payload that was unexpectedly long.");
    }
}
LOG.info("\ngot raw notification : " + notificationString);

JsonFactory jsonFactory = new JacksonFactory();
LOG.info("\ngot jsonFactory : " + jsonFactory);

// If logging the payload is not as important, use
// jacksonFactory.fromInputStream instead.
Notification notification = jsonFactory.fromString(notificationString,
        Notification.class);
LOG.info("\n got notification " + notification);
In production I do not receive all the parameters I need. Why does this mismatch happen?

What's the best way to process an array of JSON messages posted to a Node.js server?

A client sends an array of JSON messages to be stored on a Node.js server, but the client requires an acknowledgement for each message (keyed by a unique id) confirming that it was properly stored on the server and hence doesn't need to be sent again.
On the server I want to parse the JSON array, loop through it, store each message in the db, store the response for each message in a JSON array named responses, and finally send this responses array back to the client. But since the db operations are async, all the other code executes before any result is returned from the db storing methods. My question is: how do I keep updating the responses array until all db operations are complete?
var message = require('./models/message');
var async = require('async');

var VALID_MESSAGE = 200;
var INVALID_MESSAGE = 400;
var SERVER_ERROR = 500;

function processMessage(passedMessage, callback) {
    var msg = null;
    var err = null;
    var responses = [];

    isValidMessage(passedMessage, function(err, result) {
        if (err) {
            callback( createResponse(INVALID_MESSAGE, 0) );
        } else {
            var keys = Object.keys(result);
            for (var i = 0, len = keys.length; i < len; i++) {
                async.waterfall([
                    // store valid json message(s)
                    function storeMessage(callback) {
                        (function(oneMessage) {
                            message.processMessage(result[i], function(res) {
                                callback(res, result[i].mid, callback);
                            });
                        })(result[i]);
                        console.log('callback returns from storeMessage()');
                    },
                    // create a json response to send back to client
                    function createResponse(responseCode, mid, callback) {
                        var status = "";
                        var msg = "";
                        switch (responseCode) {
                            case VALID_MESSAGE: {
                                status = "pass";
                                msg = "Message stored successfuly.";
                                break;
                            }
                            case INVALID_MESSAGE: {
                                status = "fail";
                                msg = "Message invalid, please send again with correct data.";
                                break;
                            }
                            case SERVER_ERROR: {
                                status = "fail";
                                msg = "Internal Server Error! please contact the administrator.";
                                break;
                            }
                            default: {
                                responseCode = SERVER_ERROR;
                                status = "fail";
                                msg = "Internal Server Error! please contact the administrator.";
                                break;
                            }
                        }
                        var response = { "mid": mid, "status": status, "message": msg, "code": responseCode };
                        callback(null, response);
                    }
                ],
                function(err, result) {
                    console.log('final callback in series: ', result);
                    responses.push(result);
                });
            } // loop ends
        } // else ends
        console.log('now we can send response back to app as: ', responses);
    }); // isValid finishes
}
To expand on what lanzz said, this is a pretty common solution (start a number of "tasks" all at the same time, then use a common callback to determine when they're all done). Here's a quick paste of my userStats function, which gets the number of active users (DAU, WAU, and MAU):
exports.userStats = function(app, callback)
{
    var res = {'actives': {}},
        day = 1000 * 60 * 60 * 24,
        req_model = Request.alloc(app).model,
        actives = {'DAU': day, 'MAU': day * 31, 'WAU': day * 7},
        countActives = function(name, time) {
            var date = new Date(new Date().getTime() - time);
            req_model.distinct('username', {'date': {$gte: date}}, function(e, c) {
                res.actives[name] = parseInt(c ? c.length : 0, 10);
                // When every key in `actives` has a count, all queries are done.
                if (Object.keys(actives).length <= Object.keys(res.actives).length)
                    callback(null, res);
            });
        };

    var keys = Object.keys(actives);
    for (var k in keys)
    {
        countActives(keys[k], actives[keys[k]]);
    }
};
Only send your responses array when the number of items in it equals the number of keys in your result object (i.e. you've gathered responses for all of them). You can check whether you're good to send after you push each response into the array.
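Concretely, that check is just a length comparison inside the per-message callback. A minimal sketch of the pattern, assuming a hypothetical storeMessage(msg, cb) async db helper:

// Sketch only: storeMessage(msg, cb) is a hypothetical async db helper.
function processMessages(messages, done) {
    var responses = [];
    messages.forEach(function(msg) {
        storeMessage(msg, function(err, code) {
            responses.push({ mid: msg.mid, status: err ? 'fail' : 'pass', code: code });
            // Send only once every message has produced a response.
            if (responses.length === messages.length) {
                done(null, responses);
            }
        });
    });
}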