The cppcms website doesn't include an example of using sessions in asynchronous mode. How can I create an asynchronous session management system using cppcms?
Added later:
I used this code for saving a session:
session()["name"] = ...
session().save();
and elsewhere I placed this:
if(!session().load() || !session().is_set("name"))
    std::cerr << "error";
When I run the program, it shows the error.
This is my config file (session section):
"expire": "renew",
"timeout": 604800,
"location": "both",
"client" : {
"hmac": "sha1",
"hmac_key": "...",
},
"server":{
"storage": "files"
}
See the section with the heading "Now let's create our main asynchronous function"; it shows how to provide the session and bind the socket to the session.
http://cppcms.com/wikipp/en/page/cppcms_1x_aio
Just read the manuals:
http://cppcms.com/cppcms_ref/latest/classcppcms_1_1session__interface.html#ae63e68dd2ec1d615f5a6a85bcee36605
You need to call session().load() before using session object.
By default, session configuration is disabled, so enable it first; see http://cppcms.com/wikipp/en/page/cppcms_1x_config#session for reference. After you have configured sessions, the rest is the same as I previously stated. Session management is described in detail here: http://cppcms.com/wikipp/en/page/secure_programming
I am trying to switch profiles in code using appsettings.json for a .NET Core web service.
If I new up an instance of the client in code, e.g.:
using var client = new AmazonSecretsManagerClient();
client.ListSecretsAsync(listRequest, default)
.GetAwaiter()
.GetResult();
this takes the default profile that was set up and works fine.
Instead of using the default profile, I want to control the profile from json file. I am trying to follow this:
https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/net-dg-config-netcore.html
Instead of newing up the client, I am using dependency injection:
public SecretsService(IAmazonSecretsManager secretsManagerClient)
{
_secretsManagerClient = secretsManagerClient;
}
Here is the config:
"AWS": {
"Profile": "default",
"Region": "us-east-2"
}
When I make a call using
_secretsManagerClient.ListSecretsAsync(listRequest, default)
.GetAwaiter()
.GetResult();
it gives an exception
No such host is known. (secretsmanager.defaultregion.amazonaws.com:443)
What am I missing?
Check your appsettings.Development.json file. Some templates have the region set as DefaultRegion and so you have to either update it or remove it to let appsettings.json take precedence.
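For example, if appsettings.Development.json contains an AWS section like this (a hypothetical leftover from a template), it overrides appsettings.json while running in the Development environment:

```json
{
  "AWS": {
    "Region": "DefaultRegion"
  }
}
```

That would also explain the secretsmanager.defaultregion.amazonaws.com hostname in the exception: the SDK builds the endpoint from the literal region string. Removing the section, or setting a real region, resolves it.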
I want to send notifications from Stackdriver to my Centreon monitoring (built on top of Nagios) for workflow reasons. Do you have any idea how to do so?
Thank you
Stackdriver alerting allows webhook notifications, so you can run a server to forward the notifications anywhere you need to (including Centreon), and point the Stackdriver alerting notification channel to that server.
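As a sketch of what the core of that forwarding server could do, the function below translates a Stackdriver alerting webhook payload into the submit-results body that the Centreon REST API (described in another answer here) expects. The incident field names and the state-to-status mapping are assumptions to check against your actual webhook payloads:

```python
import json
import time

def stackdriver_to_centreon(payload, host, service):
    """Translate a Stackdriver alerting webhook payload into a body for
    Centreon's centreon_submit_results API (field names are assumptions)."""
    incident = payload.get("incident", {})
    # Map an open incident to CRITICAL (2) and a closed one to OK (0).
    status = "2" if incident.get("state") == "open" else "0"
    return {
        "results": [
            {
                "updatetime": str(int(time.time())),
                "host": host,
                "service": service,
                "status": status,
                "output": incident.get("summary", "Stackdriver notification"),
            }
        ]
    }

# The server receiving the webhook would POST json.dumps(body) to the
# Centreon submit-results endpoint.
body = stackdriver_to_centreon(
    {"incident": {"state": "open", "summary": "CPU usage above threshold"}},
    host="Centreon-Central",
    service="CPU",
)
print(json.dumps(body))
```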
There are two ways to send external information in the Centreon queue without a traditional passive agent mode.
First, you can use the Centreon DSM (Dynamic Services Management) addon.
It is interesting because you don't have to register a dedicated and already known service in your configuration to match the notification.
With Centreon DSM, Centreon can receive events such as SNMP traps resulting from the detection of a problem and assign the event dynamically to a slot defined in Centreon, like a tray event.
A resource has a set number of “slots” on which alerts will be assigned (stored). While this event has not been taken into account by human action, it will remain visible in the Centreon web frontend. When the event is acknowledged, the slot becomes available for new events.
The event must be transmitted to the server via an SNMP Trap.
All the configuration is made through Centreon web interface after the module installation.
Complete explanations, screenshots, and tips are described on the online documentation: https://documentation.centreon.com/docs/centreon-dsm/en/latest/user.html
Secondly, Centreon developers added a Centreon REST API you can use to submit information to the monitoring engine.
This feature is easier to use than the SNMP Trap way.
In that case, you have to create the host and service objects before using the API.
To send a status, use the following URL with the POST method:
api.domain.tld/centreon/api/index.php?action=submit&object=centreon_submit_results
Headers:
- Content-Type: application/json
- centreon-auth-token: the value of authToken you got in the authentication response
Example of a service submit body: the body is JSON with the parameters above, formatted as below:
{
  "results": [
    {
      "updatetime": "1528884076",
      "host": "Centreon-Central",
      "service": "Memory",
      "status": "2",
      "output": "The service is in CRITICAL state",
      "perfdata": "perf=20"
    },
    {
      "updatetime": "1528884076",
      "host": "Centreon-Central",
      "service": "fake-service",
      "status": "1",
      "output": "The service is in WARNING state",
      "perfdata": "perf=10"
    }
  ]
}
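As a sketch of how a client could put the URL, headers, and body together, here is a Python example using only the standard library (the endpoint and token values are placeholders):

```python
import json
import urllib.request

# Hypothetical endpoint and token; substitute your own values.
URL = ("https://api.domain.tld/centreon/api/index.php"
       "?action=submit&object=centreon_submit_results")
AUTH_TOKEN = "your-auth-token"

body = {
    "results": [
        {
            "updatetime": "1528884076",
            "host": "Centreon-Central",
            "service": "Memory",
            "status": "2",
            "output": "The service is in CRITICAL state",
            "perfdata": "perf=20",
        }
    ]
}

# Bundle the method, headers, and JSON body into one request object.
request = urllib.request.Request(
    URL,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "centreon-auth-token": AUTH_TOKEN,
    },
    method="POST",
)
# urllib.request.urlopen(request) would perform the actual submit.
```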
Example of a response body: the response is JSON with an HTTP return code and a message for each submit:
{
"results": [
{
"code": 202,
"message": "The status send to the engine"
},
{
"code": 404,
"message": "The service is not present."
}
]
}
More information is available in the online documentation: https://documentation.centreon.com/docs/centreon/en/19.04/api/api_rest/index.html
The Centreon REST API also allows you to get real-time status for hosts and services and to manage object configuration.
I've just created my first (custom) skill. I've set the function up in Lambda by uploading a zip file containing my index.js and all the necessary code, including node_modules and the base Alexa skill that mine is a child of (as per the tutorials). I made sure I zipped up the files and sub-folders, not the folder itself (as I can see this is a common cause of similar errors), but when I create the skill and test it in the web harness with a sample utterance I get:
remote endpoint could not be called, or the response it returned was
invalid.
I'm not sure how to debug this as there's nothing logged in CloudWatch.
I can see in the Lambda request that my slot value is translated/parsed successfully and the intent name is correct.
In AWS Lambda I can invoke the function successfully both with a LaunchRequest and another named intent. From the developer console though, I get nothing.
I've tried copying the JSON from the Lambda test (that works) to the developer portal and I get the same error. Here is a sample of the JSON I'm putting in the dev portal (that works in Lambda):
{
"session": {
"new": true,
"sessionId": "session1234",
"attributes": {},
"user": {
"userId": null
},
"application": {
"applicationId": "amzn1.echo-sdk-ams.app.149e75a3-9a64-4224-8bcq-30666e8fd464"
}
},
"version": "1.0",
"request": {
"type": "LaunchRequest",
"requestId": "request5678"
}
}
The first step in pursuing this problem is probably to test your lambda separate from your skill configuration.
When looking at your lambda function in the AWS console, note the 'Test' button at the top; next to it there is a drop-down with an option to configure a test event. If you select that option, you will find that there are preset test events for Alexa. Choose 'alexa start session' and then choose the 'save and test' button.
This will give you more detailed feedback about the execution of your lambda.
If your lambda works fine here, then the problem probably lies in your skill configuration, so I would go back through whatever tutorial and documentation you were using to configure your skill and make sure you did it right.
When you write that the lambda request looks fine I assume you are talking about the service simulator, so that's a good start, but there could still be a problem on the configuration tab.
We built a tool for local skill development and testing.
BST Tools
Requests and responses from Alexa will be sent directly to your local server, so that you can quickly code and debug without having to do any deployments. I have found this to be very useful for our own development.
Let me know if you have any questions.
It's open source: https://github.com/bespoken/bst
In AWS API Gateway, I have a GET method that invokes a lambda function.
When I test the method in the API Gateway dashboard, the lambda function executes successfully but API Gateway is not mapping the context.success() call to a 200 result despite having default mapping set to yes.
Instead I get this error:
Execution failed due to configuration error: No match for output mapping and no default output mapping configured
This is my Integration Response setup:
And this is my method response setup:
Basically I would expect the API Gateway to recognize the successful lambda execution and then map it by default to a 200 response but
that doesn't happen.
Does anyone know why this isn't working?
I had the same issue while deploying an API using the Serverless Framework. You can simply follow the steps below, which resolved my issue.
1- Navigate to AWS API Gateway.
2- Find your API and click on the method (POST, GET, ANY, etc.).
3- Click on Method Response.
4- Add a method response with status code 200.
5- Save it and test.
I had a similar issue and got it resolved by adding the 200 method response.
This is a 'check-the-basics' type of answer. In my scenario, CORS and the bug mentioned above were not at issue. However, the error message given in the title is exactly what I saw and what led me to this thread.
Instead, (as an API Gateway newb) I had failed to redeploy the API. Once I did that, everything worked.
As a bonus, for Terraform 0.12 users, the magic you need and should use is a triggers parameter for your aws_api_gateway_deployment resource. This will automatically redeploy for you when other related APIGW resources change. See the TF documentation for details.
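For reference, that Terraform pattern looks roughly like this (resource names are hypothetical; check the aws_api_gateway_deployment documentation for the exact recipe):

```hcl
resource "aws_api_gateway_deployment" "example" {
  rest_api_id = aws_api_gateway_rest_api.example.id

  # Redeploy whenever the API definition changes.
  triggers = {
    redeployment = sha1(jsonencode(aws_api_gateway_rest_api.example.body))
  }

  lifecycle {
    create_before_destroy = true
  }
}
```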
There was an issue when saving the default integration response mapping which has been resolved. The bug caused requests to API methods that were saved incorrectly to return a 500 error, the CloudWatch logs should contain:
Execution failed due to configuration error:
No match for output mapping and no default output mapping configured.
Since the 'ENABLE CORS' saves the default integration response, this issue also appeared in your scenario.
For more information, please refer to the AWS forums entry: https://forums.aws.amazon.com/thread.jspa?threadID=221197&tstart=0
Best,
Jurgen
What worked for me:
1. In the API Gateway console, created the OPTIONS method manually
2. In the Method Response section under the created OPTIONS method, added 200 OK
3. Selected the OPTIONS method and enabled CORS from the menu
I found the problem:
Amazon had added a new button in the API Gateway resource configuration titled 'Enable CORS'. I had earlier clicked this; however, once enabled, there doesn't seem to be a way to disable it.
Enabling CORS using this button (instead of doing it manually, which is what I ended up doing) seems to cause an internal server error even on a successful lambda execution.
SOLUTION: I deleted the resource and created it again without clicking on 'Enable CORS' this time, and everything worked fine.
This seems to be a BUG with that feature, but perhaps I just don't understand it well enough. Comment if you have any further information.
Thanks.
Check the box which says "Use Lambda Proxy integration".
This works fine for me. For reference, my lambda function looks like this...
import json

def lambda_handler(event, context):
    # Get params
    param1 = event['queryStringParameters']['param1']
    param2 = event['queryStringParameters']['param2']
    # Do some stuff with params to get body
    body = some_fn(param1, param2)
    # Return response object
    response_object = {}
    response_object['statusCode'] = 200
    response_object['headers'] = {}
    response_object['headers']['Content-Type'] = 'application/json'
    response_object['body'] = json.dumps(body)
    return response_object
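To sanity-check a proxy-integration handler locally before wiring it to API Gateway, you can invoke it with a hand-built event. A minimal, self-contained sketch (the echo body stands in for your real logic):

```python
import json

def lambda_handler(event, context):
    # Echo the query string parameters back as the JSON body.
    params = event.get('queryStringParameters') or {}
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps(params),
    }

# Simulate the event shape API Gateway sends with Lambda proxy integration.
event = {'queryStringParameters': {'param1': 'a', 'param2': 'b'}}
response = lambda_handler(event, None)
print(response['statusCode'])   # 200
```

With proxy integration, the function itself must return the statusCode, headers, and a string body; API Gateway passes them through unchanged.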
I'll just drop this here because I faced the same issue today; in my case, it was that we were appending a / at the end of the endpoint. So, for example, if this is the definition:
{
"openapi": "3.0.1",
"info": {
"title": "some api",
"version": "2021-04-23T23:59:37Z"
},
"servers": [
{
"url": "https://api-gw.domain.com"
}
],
"paths": {
"/api/{version}/intelligence/topic": {
"get": {
"parameters": [
{
"name": "username",
"in": "query",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "version",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "x-api-key",
"in": "header",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "X-AWS-Cognito-Group-ID",
"in": "header",
"schema": {
"type": "string"
}
}
],
...
Remove any / at the end of the endpoint: /api/{version}/intelligence/topic. Also do the same for the uri in the x-amazon-apigateway-integration section of the Swagger + API Gateway extensions JSON.
Make sure your ARN for the role is indeed a role (and not, e.g., the policy).
We have a setup where we have a web frontend programmed in Django and a backend written in C++ that parses data for us.
The frontend uses Celery in combination with Redis for asynchronous tasks.
Since it would be convenient in some situations, I was wondering today if it is possible to trigger a Celery task from within C++.
Since there is a Redis client available for C++, I am pretty sure that this is possible, if the correct messages are sent to Redis, however, I was not able to find any information on this anywhere.
My next step would be to try to dig the needed information out of the Celery source code, but before I do that:
Does anybody have any information on this subject that could help me or get me started or is there even someone who has done this before?
Any help is appreciated. (Also if you got a reason why this will not work.)
Thank you.
I had a similar need to trigger a celery task from logstash. Basically, I had to create a message that looked something like this:
{
"body": "base_64_encoded_string (see below)",
"content-type": "application/json",
"properties": {
"body_encoding": "base64",
"correlation_id":"f009c9e0-0ca6-42a6-a046-3d0e53e06060",
"reply_to":"e1eb91f0-6780-4c34-b633-7ef9a46baf5e",
"delivery_mode":2,
"delivery_tag": "7788b924-a7fe-4c9a-839e-1c7ca602dbba",
"delivery_info": {
"priority":0,
"routing_key":"default",
"exchange":"default"
}
}
}
In this case, the decoded body translates to:
{
"args": ["meta_val","doc_value"],
"task":"goldstone.compliance.tasks.process_fim_event",
"id":"23deb69e-49c1-4a61-8639-d4627d0fc591"
}
If you have kwargs on your task, you can add "kwargs": {"key": "value", ...} to your body.
The body above triggers a task called process_fim_event. The task def looks like:
@task()
def process_fim_event(meta, doc):
    ...
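To build that message programmatically, the body is just the JSON task payload, base64-encoded, wrapped in the envelope shown above. A Python sketch of the same construction (the task name and args come from the example; any client, including a C++ Redis client, would push the resulting JSON onto the queue's Redis list):

```python
import base64
import json
import uuid

def build_celery_message(task, args, kwargs=None, task_id=None):
    """Build a Celery-style message with a base64-encoded JSON body,
    matching the envelope shown in the example above."""
    body = {
        "task": task,
        "id": task_id or str(uuid.uuid4()),
        "args": args,
        "kwargs": kwargs or {},
    }
    return {
        "body": base64.b64encode(json.dumps(body).encode("utf-8")).decode("ascii"),
        "content-type": "application/json",
        "properties": {
            "body_encoding": "base64",
            "correlation_id": body["id"],
            "delivery_mode": 2,
            "delivery_info": {
                "priority": 0,
                "routing_key": "default",
                "exchange": "default",
            },
        },
    }

message = build_celery_message(
    "goldstone.compliance.tasks.process_fim_event",
    ["meta_val", "doc_value"],
)
# Decoding the body recovers the task payload.
decoded = json.loads(base64.b64decode(message["body"]))
print(decoded["task"])
```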
The easiest way of doing this that I know of is to use Flower, an HTTP API for Celery. With Flower you can create a task from anything that can make an HTTP request. One example from the GitHub README:
$ curl -X POST -d '{"args":[1,2]}' http://localhost:5555/api/task/async-apply/tasks.add
So the idea is that your C++ app makes an HTTP request against the Flower API, which then inserts the task into your Redis queue.