ejabberd Integration with Riak

I'm building a chat application that uses ejabberd as the server, with Riak as the backend NoSQL database (on AWS). I can get a single-node ejabberd and a Riak cluster working correctly on their own, but I am not able to get ejabberd to push chat data into the database.
As a first step, I want to store offline messages in Riak. I've written a simple ejabberd module (mod_offline_riak) that attaches to offline_message_hook. The hook is called successfully when an offline message is sent, but the moment the Riak connection is made (in riakc_pb_socket:start_link), I get an undef error in the ejabberd logs. Relevant code snippets are pasted below.
Furthermore, the default ejabberd installation (built from source, v15.04) does not include the riak-erlang-client dependency, so I added it to ejabberd's rebar.config.script and re-ran make / make install, but to no avail.
start(Host, _Opts) ->
    ?INFO_MSG("Starting module mod_offline_riak ...", []),
    ejabberd_hooks:add(offline_message_hook, Host, ?MODULE, save_message, 0),
    ok.

save_message(_From, To, Packet) ->
    ?INFO_MSG("Entered function save_message ...", []),
    create_riak_object(To, Packet),
    ok.

create_riak_object(To, Packet) ->
    ?INFO_MSG("Entered function create_riak_object ...", []),
    {ok, Pid} = riakc_pb_socket:start_link("***IP of one of the Riak nodes***", 8087),
    PollToBeSaved = riakc_obj:new(?DATA_BUCKET, To, Packet),
    riakc_pb_socket:put(Pid, PollToBeSaved),
    ok.
The error in the ejabberd log is:
2015-12-28 16:06:02.166 [error] <0.503.0>#ejabberd_hooks:run1:335 {undef,
[{riakc_pb_socket,start_link,["***Riak IP configured in the module***",8087],
[]},{mod_offline_riak,create_riak_object,2,[{file,"mod_offline_riak.erl"},
{line,39}]},{mod_offline_riak,save_message,3,[{file,"mod_offline_riak.erl"},
{line,23}]},{ejabberd_hooks,safe_apply,3,[{file,"src/ejabberd_hooks.erl"},
{line,385}]},{ejabberd_hooks,run1,3,[{file,"src/ejabberd_hooks.erl"},{line,332}]},
{ejabberd_sm,route,3,[{file,"src/ejabberd_sm.erl"},{line,115}]},
{ejabberd_local,route,3,[{file,"src/ejabberd_local.erl"},{line,112}]},
{ejabberd_router,route,3,[{file,"src/ejabberd_router.erl"},{line,74}]}]}
I'm afraid I've been struggling with this for the last few days, and I'm still finding my feet with Erlang / Riak, so I'd appreciate any help here.
On a slight tangent, I plan to allow media attachments to be embedded in chat messages too. I presume the recommendation would be to use Riak CS instead of plain Riak; I'll be leveraging S3 in the background.
Finally, is there any good ejabberd / Riak / Redis integration material that folks are aware of and can point me to? I understand there was recently a talk in London, but I'm based in NY, so I missed it... :-(
Thanks again for all your help...

undef means the module or function is not available at runtime. Presumably, you either did not build the riakc_pb_socket module, or its beam file is not on your Erlang code path.
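By way of illustration, a rebar2-style dependency entry (the build style ejabberd 15.04 uses) could look roughly like the sketch below; the repository URL is the official Basho client, but the tag is an assumption and should be matched to your Riak version:

%% Hypothetical deps entry for ejabberd's rebar config; pin the tag to a
%% riak-erlang-client release that matches your Riak cluster.
{deps, [
    {riakc, ".*",
     {git, "https://github.com/basho/riak-erlang-client", {tag, "2.1.1"}}}
]}.

After rebuilding and reinstalling, you can check from an attached ejabberd shell (ejabberdctl debug) whether the client is actually reachable: code:which(riakc_pb_socket) returns the path of the beam file if it is on the code path, or non_existing otherwise, which is exactly the condition that produces this undef.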

Related

Attempting to put a CloudWatch Agent onto an On-Premise server. Issues with cwagent-otel-collector

As the title says, I am attempting to put a CloudWatch Agent (CW agent) on my On-Premise Server (OPS).
After running this line of code that I got from the AWS User Guide to start the CW agent:
& $Env:ProgramFiles\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1 -m ec2 -a start
I got this error:
****** processing cwagent-otel-collector ******
cwagent-otel-collector will not be started as it has not been configured yet.
****** processing amazon-cloudwatch-agent ******
AmazonCloudWatchAgent has been started
I didn't know what this was, so I searched and found that someone else who had this issue had not created a config file.
I did create a config file (named config.json by default) using the configuration wizard, and I am still having the issue.
I have looked through a number of pages in that user guide, but nothing has resolved it.
Thank you in advance for any assistance you can provide.
This message is informational, not an error.
The CloudWatch agent is bundled with the AWS OpenTelemetry collector agent; they are actually two agents. The CloudWatch agent and the OTel collector have separate configuration files. If you provide a config for one and not the other, only the configured one is started. This is expected behavior.
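As an illustration (the config path is an assumption, and note that on an on-premises server the mode flag would normally be onPremise rather than the ec2 used above), loading the wizard's config.json into the CloudWatch agent and starting it looks roughly like this:

# Hedged sketch: fetch-config loads the wizard's config and -s starts the
# agent afterwards; the config file path is an assumption - use wherever
# the wizard actually saved yours.
& "$Env:ProgramFiles\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" `
    -a fetch-config -m onPremise `
    -c "file:C:\Program Files\Amazon\AmazonCloudWatchAgent\config.json" -s

The OTel collector keeps a separate config of its own, so it will keep printing the "has not been configured yet" line until it is given one; that does not stop the CloudWatch agent itself from running.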
Thank you for taking the time to answer. I have since resolved the issue.
Everything from the command I was using to the path where the file resided was incorrect.
Starting over and going through all the steps again, this time with some background knowledge, is what helped; the first installation, combined with learning everything for the first time, is what produced the issue.
To anyone hitting a wall like this: I recommend starting over. I know it is not what anyone wants to do, but in the end it saved time.

Why "directory-expression" is not creating dynamic folder ddMMyyyy using regex on linux server?

I am using Spring JMS integration to consume MQ messages, and a file integration outbound-channel-adapter to move error MQ messages (if any) to a dynamic folder error/filtered/ddMMyyyy via directory-expression.
The filter on mqMsgFeedChannel checks whether an MQ message should be filtered out; if so, the message should be moved into today's date folder, e.g. 04102018 (ddMMyyyy), under myproject.consumer.filter.output.dir=/our_nas_drive/our_project/data/our_env/error/filtered.
<int:filter id="mqMessageFilter"
    expression="${myproject.consumer.filter.expression:true}"
    input-channel="mqMsgFeedChannel"
    output-channel="mqMsgProcessChannel"
    discard-channel="filterFileOutputChannel"
    throw-exception-on-rejection="false"/>
I can see the MQ depth increase by 1 when a message is published and drop back to 0 once it is consumed, but the configuration is not creating the dynamic directory. I also tried creating the directory manually; even then it does not move the error message file into the static 04102018 folder.
I checked whether it is a permission issue by using the static directory attribute without the expression, and I can confirm that it is not a permission issue.
<int-file:outbound-channel-adapter id="filterFileOutput"
    channel="filterFileOutputChannel"
    auto-create-directory="true"
    directory-expression="'${myproject.consumer.filter.output.dir}'+new java.text.SimpleDateFormat('ddMMyyyy').format(new java.util.Date())"
    delete-source-files="true"/>
This expression appears to work on Windows but not on Linux once deployed to the server.
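For comparison, here is a hedged sketch of the same adapter with an explicit '/' separator added to the SpEL expression. Whether or not it is the root cause here, note that the property value above ends in .../filtered with no trailing slash, so the concatenation as written would produce a sibling directory named filtered04102018 rather than a folder inside filtered:

<!-- Sketch only: identical to the adapter above except for the added '/' separator. -->
<int-file:outbound-channel-adapter id="filterFileOutput"
    channel="filterFileOutputChannel"
    auto-create-directory="true"
    directory-expression="'${myproject.consumer.filter.output.dir}' + '/' + new java.text.SimpleDateFormat('ddMMyyyy').format(new java.util.Date())"
    delete-source-files="true"/>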
Please advise; thank you in advance.
Tom

Sitecore 8.2 Separate Processing Server

I am using Sitecore 8.2 Update 6. Previously, my CM, Processing, and Reporting roles were all on a single CM server. Now I need a separate Processing server, while Reporting and CM stay together on one server.
I have configured my processing server as mentioned in the following url:
https://doc.sitecore.net/sitecore_experience_platform/82/setting_up_and_maintaining/xdb/configuring_servers/configure_a_processing_server
and configured my connection strings as per the following url:
https://doc.sitecore.net/sitecore_experience_platform/81/setting_up_and_maintaining/xdb/configuring_servers/database_connection_strings_for_configuring_servers
Now I have a couple of questions:
1) Is there any change required on my CM or CD to make them aware of the separate processing server?
2) How can I test whether my processing server is doing the required tasks?
Thanks,
Nicks
Your CM and CD do not need to know about the processing server, but you need to make sure that processing functions are not enabled on the CM or CD.
You will know if processing is working by looking at the logs and seeing if the pipelines are executing and not throwing errors.
You will also see analytics data being processed and showing up in the reporting database. If you are not seeing analytics data, this is an indication you might have errors in processing.
Note that there are several possible reasons reporting data might not be working, but if new analytics data is making it into the reporting database, then processing is running.

Does the AWS Application Load Balancer still suffer from HOL blocking with clients?

We recently switched to Amazon's new ALBs (Application Load Balancers) and were excited to get the benefits of HTTP/2 multiplexing. But it appears that there is still a HOL (head-of-line) blocking issue going on.
The client is able to request the images in parallel but still has to wait for a period of time before it can begin downloading them. My guess is that because AWS's Application Load Balancer terminates HTTP/2 and then talks to the EC2 instances via HTTP/1, the HOL delay is created on the backend leg.
I may be reading this chart wrong; if so, could someone please explain why the content isn't being downloaded faster? In other words, I would expect the green portion to be smaller and the blue bar to appear sooner. Does networking between the load balancer and the EC2 instance not suffer from HOL? Is there some magic stuff happening?
I believe that what you see might be related to how current Chrome (around v54) prioritizes requests.
Exclusive dependencies
Chrome uses HTTP/2's exclusive dependencies (that can be dynamically re-arranged as the page is parsed and new resources are discovered) in order to have all streams depend strictly on one another. That means that all resources are effectively sent one after the other. I wrote a small utility that parses the output of chrome://net-internals/#http2 in order to show the tree for a given page: https://github.com/deweerdt/h2priograph.
A dependency from stream to another can either be exclusive or not. Section 5.3.1 of RFC 7540 covers that part: https://www.rfc-editor.org/rfc/rfc7540#section-5.3.1, here's the example they give when adding a new stream D to A:
Not exclusive:

    A                 A
   / \      ==>      /|\
  B   C             B D C

Exclusive:

                      A
    A                 |
   / \      ==>       D
  B   C              / \
                    B   C
Example
Let's take this page for example
Chrome only uses the exclusive dependencies, so the dependency tree might at first look like:
HTML
|
PNG
|
PNG
When the javascript is discovered, Chrome reprioritizes it above the images (because the javascript is more likely to affect how the page is rendered), so Chrome puts the JS right below the HTML:
HTML
|
JS
|
PNG
|
PNG
Because all the requests are daisy-chained, it might look, as in the waterfall you posted, as though requests are executed one after another as in HTTP/1, but they are not: the ordering can be re-arranged on the fly, and all the requests are sent to the server as soon as possible.
Firefox on the other hand will have resources of the same type share the same priority so you should be able to see some interlacing there.
Please note that what Chrome does isn't equivalent to what used to happen with HTTP/1 since all the requests have been sent to the server, so the server always has something to be sent on the wire. In addition to that, ordering is dynamic, so if a higher priority resource is discovered in the page, Chrome will re-order the priority tree so that this new resource takes precedence over existing lower priority resources.
Might be super late for an answer, but still.
AWS ELB documentation clearly states that:
The backend connection (between ALB and target) supports HTTP 1.1 only
The backend connection does not support HTTP 1.1 pipelining (which might have somewhat replicated HTTP 2 multiplexing).
So essentially what you observe are multiplexed requests on the frontend connection being processed one at a time by the ALB, each waiting for its backend reply before the next frontend request is processed and routed to a target.
You can refer to "HTTP connections" section from documentation.
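If you want to confirm where the multiplexing stops (a hedged check; example.com is a placeholder for your ALB endpoint, and the %{http_version} write-out variable needs a reasonably recent curl), compare the protocol negotiated on the frontend connection with what your target's access logs record:

# Prints the HTTP version negotiated with the ALB ("2" when HTTP/2 is used);
# example.com is a placeholder for your ALB endpoint.
curl -s -o /dev/null -w '%{http_version}\n' --http2 https://example.com/

The target's own access logs will still record HTTP/1.1, since that is all the backend connection speaks.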

Web Service in DataStage

I get the following error: Service Invocation Exception. I am working with IBM InfoSphere DataStage and QualityStage Designer version 8.7, using a server job containing a sequential file stage, a web service stage, and another sequential file stage.
Any idea what could be the reason for this error?
Make sure you have chosen the proper DataStage job type and that the stage that operates on the web service is configured properly.
You should also check the DataStage logs for more information about the root cause of the error.