I just started with Rasa this week, and I'm quite happy with the results so far in that it responds well and so on. However, I now have custom actions in an actions.py file, and when I'm in the rasa shell it seems to skip that file entirely and just asks for another input (see the transcript below). It doesn't even give an error. What am I doing wrong?
I tried to run "rasa run actions" in another terminal, with an action_endpoint in the endpoints.yml file.
In the endpoints file:
action_endpoint:
  url: "http://localhost:5055/webhook"
This is the part I'm running in the separate terminal:
(actions) C:\.potato>python -m rasa_sdk --actions actions
2019-07-11 10:29:16 INFO rasa_sdk.endpoint - Starting action endpoint server...
2019-07-11 10:29:17 INFO rasa_sdk.executor - Registered function for 'action_validate_cuisine'.
2019-07-11 10:29:17 INFO rasa_sdk.executor - Registered function for 'action_search_restaurants'.
2019-07-11 10:29:17 INFO rasa_sdk.endpoint - Action endpoint is up and running. on ('0.0.0.0', 5055)
This is the output in the other terminal:
(cozmobot) C:\.potato>rasa shell
2019-07-11 10:49:36 INFO root - Starting Rasa Core server on http://localhost:5005
Bot loaded. Type a message and press enter (use '/stop' to exit):
Your input -> Hi!
Hey! What's up?
Your input -> I'm hungry
What kind of restaurant would you like?
Your input -> I would like italian
Your input -> <HERE AN ANSWER SHOULD BE GIVEN BY THE BOT VIA ACTIONS.PY>
Your input -> /stop
2019-07-11 10:50:19 INFO root - Killing Sanic server now.
The third input should be answered by the bot with a matching restaurant, but for some reason the custom action is never reached: no error is given, and the bot just asks the user for another input.
You need to add the --endpoints flag to the command, i.e. run
rasa shell --endpoints endpoints.yml
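A minimal sketch of the two-terminal workflow (assuming endpoints.yml sits in the project root):

# terminal 1: start the custom action server
rasa run actions

# terminal 2: start the shell and point it at the endpoints file
rasa shell --endpoints endpoints.yml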
I am attempting to install the CloudWatch agent (CW agent) on my on-premises server (OPS).
After running this line of code that I got from the AWS User Guide to start the CW agent:
& $Env:ProgramFiles\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1 -m ec2 -a start
I got this error:
****** processing cwagent-otel-collector ******
cwagent-otel-collector will not be started as it has not been configured yet.
****** processing amazon-cloudwatch-agent ******
AmazonCloudWatchAgent has been started
I did not know what this meant, so I searched and found that someone else who had this issue had not created a config file.
I did create a config file (named config.json by default) using the configuration wizard and I am still having the issue.
I have tried looking into a number of pages on that user guide, but nothing has resolved the issue.
Thank you in advance for any assistance you can provide.
This message is info and not an error.
The CloudWatch agent is bundled with the AWS OpenTelemetry (OTel) collector agent, so there are actually two agents. The CloudWatch agent and the OTel collector have separate configuration files; if you provide a config for one and not the other, only the configured one is started. This is expected behavior.
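For an on-premises server, the documented way to load the wizard's config.json and start the CloudWatch agent looks roughly like this (the config path below is a placeholder; point it at the file the wizard actually wrote):

# path to config.json is a placeholder - adjust to where the wizard saved it
& "$Env:ProgramFiles\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m onPremise -s -c file:"C:\path\to\config.json"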
Thank you for taking the time to answer. I have since resolved the issue.
Everything from the command I was using to the path where the file resided was incorrect.
Starting over and going through all the steps again with the background information helped.
The issue came from doing the first installation while learning everything for the first time.
To anyone having this issue: when you hit a wall like this, I recommend starting over. I know it is not what anyone wants to do, but in the end it saved time.
My goal is to execute a benchmark deployed as a docker image. While doing so, I ran into too many issues, so I decided to first get something extremely trivial working.
So I decided to follow the guide at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html
and use the "ping" example: it should just ping a domain a couple of times and stop.
The problem is, I always receive this message in the task status:
STOPPED (CannotStartContainerError: Error response from dae)
I tried it with various subnets and security groups, but the result is always the same - the task starts, and after a minute or two fails with the message above.
I even tried it on a fresh new AWS account, using these steps:
in https://us-east-2.console.aws.amazon.com/ecs/ created a new cluster (networking only)
in task definitions, created a task definition
with the docker image alpine:latest and the command ping -c 4 google.com (a sketch of the resulting task definition follows these steps)
then selected the cluster, switched to the "Tasks" tab, and opened the run dialog
with one of the pre-created subnets
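For reference, those inputs produce a task definition roughly like the following (a sketch; the family name and cpu/memory sizes are illustrative, only the image and command come from the steps above):

{
  "family": "ping-test",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "ping",
      "image": "alpine:latest",
      "essential": true,
      "command": ["ping", "-c", "4", "google.com"]
    }
  ]
}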
After executing:
the task appears in the cluster's tasks list in PENDING state
it takes a couple of minutes
eventually (after using the refresh button), it changes to the message mentioned above - STOPPED (CannotStartContainerError: Error response from dae)
My guess is that the reason is:
either the task cannot download the image
or the instance cannot reach the outside network
What am I doing wrong? How can I fix it?
In my case, too, the log group was the problem; the one I had configured wasn't working. So I enabled the "Auto-configure CloudWatch Logs" option in the "Log Configuration" section of the container settings.
Also, if you open the stopped task, navigate to the container section and expand it, you can see a detailed error message under the Details section.
It could be a problem with the entrypoint in the task definition, as pointed out in the comments on the question, e.g. Entrypoint: ["sh","-c"]
It could also be a bad reference, for example a wrong log group in the logConfiguration or something similar.
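For example, the awslogs log configuration in the container definition typically looks like this (a sketch; the group name and region are illustrative, and the group must already exist unless you ask the driver to create it):

"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/ping-test",
    "awslogs-region": "us-east-2",
    "awslogs-stream-prefix": "ecs",
    "awslogs-create-group": "true"
  }
}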
I just created the log group in my CloudWatch console, because it had not been created, and now everything is working.
All,
I am using Ubuntu on my AWS EC2 instance. My previous developer created a custom message that is shown once we SSH into the instance, but I would like to change it. I have googled extensively, but no luck. Can someone help? The text I want to change is "Live 1A".
Edit the file /etc/motd
motd stands for Message Of The Day
MOTD(5) - Linux Programmer's Manual
NAME
motd - message of the day
DESCRIPTION
The contents of /etc/motd are displayed by login(1) after a successful login but just before it executes the login shell.
The abbreviation "motd" stands for "message of the day", and this file has been traditionally used for exactly that (it requires
much less disk space than mail to all users).
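For example, a sketch of replacing the message (the new text below is a placeholder; back up the old file first):

# keep a copy of the current message, then overwrite it
sudo cp /etc/motd /etc/motd.bak
echo "Your new message here" | sudo tee /etc/motd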
I have a Hortonworks HDP 2.5.3 cluster, and MapReduce jobs in YARN are failing with the error:
java.io.IOException: DistCp failure: Job job_1498784032636_0015 has failed:
Application application_1498784032636_0015 failed 2 times due to AM Container for appattempt_1498784032636_0015_000002 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://asterdart0005.labs.teradata.com:8088/cluster/app/application_1498784032636_0015 Then click on links to logs of each attempt.
Diagnostics: Application application_1498784032636_0015 initialization failed (exitCode=255) with output:
main : command provided 0
main : run as user is hdfs
main : requested yarn user is hdfs
Requested user hdfs is banned
Later I googled, and it seems the hdfs user is a banned user, per the configuration in the file /etc/hadoop/conf/container-executor.cfg on each node. Here is the content of the file:
yarn.nodemanager.local-dirs=/hadoop/yarn/local
yarn.nodemanager.log-dirs=/hadoop/yarn/log
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
min.user.id=500
I modified the file on all nodes (namenode, edge, and data nodes), as below:
yarn.nodemanager.local-dirs=/hadoop/yarn/local
yarn.nodemanager.log-dirs=/hadoop/yarn/log
yarn.nodemanager.linux-container-executor.group=hadoop
#banned.users=hdfs,yarn,mapred,bin
min.user.id=500
and restarted the HDFS, YARN, and MapReduce2 services through Ambari. After restarting, my jobs fail with the same error, and when I checked the content of /etc/hadoop/conf/container-executor.cfg, it looks like it was reset to its initial state, as below:
yarn.nodemanager.local-dirs=/hadoop/yarn/local
yarn.nodemanager.log-dirs=/hadoop/yarn/log
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
min.user.id=500
Any idea what the solution is here to remove those users from the banned users list?
The first thing to note is that you cannot comment out the banned.users line; instead, set the correct users in the value of the banned.users list (i.e. if you do not want to ban the hdfs user, change banned.users=hdfs,yarn,mapred,bin to banned.users=yarn,mapred,bin). If you comment out the banned.users list, then hdfs, yarn, and mapred are banned by default anyway.
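For example, keeping the other values from the question, the corrected file would look like this (whether edited directly or via the Ambari template described below):

yarn.nodemanager.local-dirs=/hadoop/yarn/local
yarn.nodemanager.log-dirs=/hadoop/yarn/log
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=yarn,mapred,bin
min.user.id=500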
Another thing: you can follow the steps given below to propagate the change to all nodes.
Go to the Ambari server node
Modify /var/lib/ambari-server/resources/common-services/YARN/<version>/package/templates/container-executor.cfg.j2 to configure the banned users
Restart the Ambari server and all Ambari agents (see the sketch below)
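A rough sketch of that last step, assuming the standard Ambari service scripts:

# on the Ambari server node
sudo ambari-server restart
# on every node running an Ambari agent
sudo ambari-agent restart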
Found the solution after searching, but leaving this here in case somebody runs into a similar kind of confusion. See the resolution at the end.
I'm trying to figure out why AWS CloudWatch log service fails to understand the right timestamp for my log events. Currently all my events are being saved under Time 2017-01-01 no matter what the actual timestamp in the event is.
I'm feeding the log from syslog, where docker saves the logged events, and I configured docker to write the timestamp in the format:
170105/103242 (%y%m%d/%H%M%S)
I configured awslogs service with parameters:
datetime_format = %y%m%d/%H%M%S
I restarted the service and hit the server, but still, when I go to CloudWatch and look at the log entries, even entries that indeed start with the timestamp 170105/103242 are saved as events belonging to the date 2017-01-01, which ends up containing all events between 01-01 and 01-05.
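As a sanity check (not part of the original setup), the format string itself does parse such a timestamp, so the format specification is not the problem:

# Python: verify that the configured datetime_format matches the logged timestamp
from datetime import datetime
print(datetime.strptime("170105/103242", "%y%m%d/%H%M%S"))  # 2017-01-05 10:32:42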
When I look at the awslogs.log I can see following lines:
2017-01-05 11:05:28,633 - cwlogs.push - INFO - 29223 - MainThread - Missing or invalid value for use_gzip_http_content_encoding config. Defaulting to using gzip encoding.
2017-01-05 11:05:28,633 - cwlogs.push - INFO - 29223 - MainThread - Using default logging configuration.
This makes me think that the configuration probably isn't actually reading/using the datetime_format, but I don't understand why it ends up using the default. I tried to put
use_gzip_http_content_encoding = true
under general settings, but it doesn't change the errors.
I am running out of ideas - has anyone managed to configure the awslogs agent in a way where the datetime_format is actually used correctly?
Edit:
I'm currently hacking more console logging into the local python2.7 push.py to see what is going on :)
RESOLVED:
OK, the problem was that I came into this project after the initial setup had been done, and I was under the impression that the logger was configured to use the .conf file at:
/etc/awslogs/awslogs.conf
that was dynamically populated.
The environment had a script that gave this location to awslogs-agent-setup.py which tried to make the agent understand that configuration should be read from here.
However, this script didn't actually do what it was supposed to do, and when the service started it actually read the config from
/var/awslogs/etc/awslogs.conf
which contained the default values.
So the actual resolution was to change the datetime_format parameter in the default config and forget about the config I thought the service was using.
Add logging to /var/awslogs/lib/python2.7/site-packages/cwlogs/push.py and see how the actual config parameters are interpreted.
You will probably find out that the service is actually using the configuration file at the default location:
/var/awslogs/etc/awslogs.conf
and hence you have to edit the configuration values there for them to actually be read.
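For reference, the relevant stanza in /var/awslogs/etc/awslogs.conf would look roughly like this (a sketch; the section name, file path, and log group are illustrative, and only the datetime_format line is taken from the question):

[/var/log/syslog]
datetime_format = %y%m%d/%H%M%S
file = /var/log/syslog
buffer_duration = 5000
initial_position = start_of_file
log_stream_name = {instance_id}
log_group_name = my-docker-logs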