Using the WSO2 Message Broker server with a PHP client

I am trying to connect a PHP client (using the php-amqplib library) to a server based on WSO2 MB (WSO2 Message Broker version 3.1.0).
I could not get it to work with amqp_consumer.php; it always fails with this error:
INFO {org.wso2.andes.server.protocol.AMQProtocolEngine} - Unable to create SASL Server:AMQPLAIN whilst processing:[ConnectionStartOkBodyImpl: clientProperties={product=[LONG_STRING: AMQPLib], platform=[LONG_STRING: PHP], version=[LONG_STRING: 2.6], information=[LONG_STRING: ], copyright=[LONG_STRING: ], capabilities=[FIELD_TABLE: {authentication_failure_close=[BOOLEAN: true], publisher_confirms=[BOOLEAN: true], consumer_cancel_notify=[BOOLEAN: true], exchange_exchange_bindings=[BOOLEAN: true], basic.nack=[BOOLEAN: true], connection.blocked=[BOOLEAN: true]}]}, mechanism=AMQPLAIN, response=[5, 76, 79, 71, 73, 78, 83, 0, 0, 0, 5, 97, 100, 109, 105, 110, 8, 80, 65, 83, 83, 87, 79, 82, 68, 83, 0, 0, 0, 5, 97, 100, 109, 105, 110], locale=en_US]
[2016-11-04 08:05:26,901] INFO {org.wso2.andes.server.protocol.AMQProtocolEngine} - Closing connection due to: org.wso2.andes.AMQConnectionException: Unable to create SASL Server:AMQPLAIN [error code 506: resource error]
I am using these parameters in config.php as the connection parameters:
require_once __DIR__ . '/../vendor/autoload.php';
define('HOST', 'localhost');
define('PORT', 5692);
define('USER', 'admin');
define('PASS', 'admin');
define('VHOST', '/');
My questions:
1. Can you recommend a PHP library or tutorial for establishing communication between PHP code and WSO2 MB?
2. Which authentication mechanisms does WSO2 MB allow for connections (PLAIN, AMQPLAIN, ...)?
3. Any help is appreciated :)

Related

Getting BulkWriteError when using MongoDB with djangorestframework-simplejwt

I am using MongoDB and SimpleJWT in Django REST Framework to authenticate and authorize users. I tried to implement user logout, which in SimpleJWT basically means blacklisting the user's token. When the first user logs in, everything seems okay and their refresh token is added to the outstanding token table. But when I try to log in a second user, I get the error below:
raise BulkWriteError(full_result)
pymongo.errors.BulkWriteError: batch op errors occurred, full error: {'writeErrors': [{'index': 0, 'code': 11000, 'keyPattern': {'jti_hex': 1}, 'keyValue': {'jti_hex': None}, 'errmsg': 'E11000 duplicate key error collection: fsm_database.token_blacklist_outstandingtoken index: token_blacklist_outstandingtoken_jti_hex_d9bdf6f7_uniq dup key: { jti_hex: null
}', 'op': {'id': 19, 'user_id': 7, 'jti': '43bccc686fc648f5b60b22df3676b434', 'token': 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTY1OTY1NDUzNCwiaWF0IjoxNjU5NTY4MTM0LCJqdGkiOiI0M2JjY2M2ODZmYzY0OGY1YjYwYjIyZGYzNjc2YjQzNCIsInVzZXJfaWQiOjd9.aQmt5xAyncfpv_kDD2pF7iS98Hld98LhG6ng-rCW23M', 'created_at': datetime.datetime(2022,
8, 3, 23, 8, 54, 125539), 'expires_at': datetime.datetime(2022, 8, 4, 23, 8, 54), '_id': ObjectId('62eb00064621b38109bbae16')}}], 'writeConcernErrors': [], 'nInserted': 0, 'nUpserted': 0, 'nMatched': 0, 'nModified': 0, 'nRemoved': 0, 'upserted': []}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\MR.Robot\.virtualenvs\fsm-GjGxZg3c\lib\site-packages\djongo\cursor.py", line 51, in execute
self.result = Query(
File "C:\Users\MR.Robot\.virtualenvs\fsm-GjGxZg3c\lib\site-packages\djongo\sql2mongo\query.py", line 784, in __init__
self._query = self.parse()
File "C:\Users\MR.Robot\.virtualenvs\fsm-GjGxZg3c\lib\site-packages\djongo\sql2mongo\query.py", line 869, in parse
raise exe from e
djongo.exceptions.SQLDecodeError:
Keyword: None
Sub SQL: None
FAILED SQL: INSERT INTO "token_blacklist_outstandingtoken" ("user_id", "jti", "token", "created_at", "expires_at") VALUES (%(0)s, %(1)s, %(2)s, %(3)s, %(4)s)
Params: [7, '43bccc686fc648f5b60b22df3676b434', 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTY1OTY1NDUzNCwiaWF0IjoxNjU5NTY4MTM0LCJqdGkiOiI0M2JjY2M2ODZmYzY0OGY1YjYwYjIyZGYzNjc2YjQzNCIsInVzZXJfaWQiOjd9.aQmt5xAyncfpv_kDD2pF7iS98Hld98LhG6ng-rCW23M', datetime.datetime(2022, 8, 3, 23, 8, 54, 125539), datetime.datetime(2022, 8, 4, 23, 8, 54)]
Version: 1.3.6
MongoDB seems to have a problem inserting the token for the second user into the outstanding token table.
How can I fix this?
So I asked the library maintainers and they said that they don't support MongoDB. Check out this issue.

Problems with gcovr and regex filters

I am trying to merge coverage data for specific files from multiple runs into a single "report". In each project I generate test coverage for different files, each in a separate project folder, and at the end I pick and choose which files to include in my final report. If I use the following command line, where I select each file with a --filter [complete path and filename], it works perfectly and I get a combined report with just these files. I can also see it pick the files up in the log shown below, since I have the -v option enabled.
gcovr -g -k -s --html --html-details -o code_coverage_summary2.html -r . --filter firmware/crc32/crc_unit_tests.X/mcc_generated_files/boot/boot_verify_crc32.c --filter firmware/sha256/sha_unit_tests.X/mcc_generated_files/boot/boot_verify_sha256.c --filter firmware/checksum/checksum_unit_tests.X/mcc_generated_files/boot/boot_verify_checksum16.c --filter firmware/command_processor/bootloader_pic_tb.X/mcc_generated_files/boot/boot_process.c -v > log4.txt
Log file from the above run:
(many many lines before)
currdir C:\Users\murphy\Downloads\archive (5)\archive\src\test
gcov_fname C:\Users\murphy\Downloads\archive (5)\archive\src\test\firmware\sha256\sha_unit_tests.X\mcc_generated_files\boot\boot_verify_sha256.c.gcov
[' -', ' 0', 'Source', 'boot_verify_sha256.c\n']
source_fname None
root C:\Users\murphy\Downloads\archive (5)\archive\src\test
fname C:\Users\murphy\Downloads\archive (5)\archive\src\test\firmware\sha256\sha_unit_tests.X\mcc_generated_files\boot\boot_verify_sha256.c
Parsing coverage data for file C:\Users\murphy\Downloads\archive (5)\archive\src\test\firmware\sha256\sha_unit_tests.X\mcc_generated_files\boot\boot_verify_sha256.c
uncovered: {122}
covered: {49: 1, 54: 1, 55: 1, 57: 1, 59: 1, 60: 1, 61: 1, 64: 1, 65: 1, 70: 1, 75: 1, 80: 1, 81: 1, 83: 1, 85: 1, 88: 1, 90: 1, 92: 1, 95: 1, 97: 1, 100: 1, 102: 1, 105: 1, 107: 1, 110: 1, 112: 1, 115: 1, 117: 1, 120: 1, 125: 1, 127: 1, 132: 1}
branches: {}
noncode: {128, 129, 3, 131, 133, 6, 134, 9, 12, 23, 27, 30, 31, 33, 35, 37, 39, 44, 46, 48, 50, 51, 53, 56, 58, 62, 63, 66, 67, 69, 71, 72, 74, 76, 77, 79, 82, 84, 86, 87, 89, 91, 93, 94, 96, 98, 99, 101, 103, 104, 106, 108, 109, 111, 113, 114, 116, 118, 119, 121, 123, 124, 126}
Finding source file corresponding to a gcov data file
currdir C:\Users\murphy\Downloads\archive (5)\archive\src\test
gcov_fname C:\Users\murphy\Downloads\archive (5)\archive\src\test\firmware\crc32\crc_unit_tests.X\mcc_generated_files\boot\boot_verify_crc32.c.gcov
[' -', ' 0', 'Source', 'boot_verify_crc32.c\n']
source_fname None
root C:\Users\murphy\Downloads\archive (5)\archive\src\test
fname C:\Users\murphy\Downloads\archive (5)\archive\src\test\firmware\crc32\crc_unit_tests.X\mcc_generated_files\boot\boot_verify_crc32.c
Parsing coverage data for file C:\Users\murphy\Downloads\archive (5)\archive\src\test\firmware\crc32\crc_unit_tests.X\mcc_generated_files\boot\boot_verify_crc32.c
uncovered: set()
covered: {50: 1, 55: 1, 56: 1, 57: 1, 62: 1, 63: 1, 65: 1, 67: 1, 70: 1, 72: 1, 74: 1, 77: 1, 79: 1, 82: 1, 84: 1, 87: 1, 89: 1, 92: 1, 94: 1, 97: 1, 99: 1, 102: 1, 104: 1, 108: 1, 109: 1, 110: 1, 112: 1, 114: 1, 120: 1, 121: 1, 123: 1, 128: 1, 129: 1, 131: 1, 134: 1, 139: 1}
branches: {}
noncode: {3, 132, 133, 6, 135, 136, 9, 138, 12, 140, 141, 23, 28, 31, 33, 35, 38, 40, 45, 47, 49, 51, 52, 54, 58, 59, 61, 64, 66, 68, 69, 71, 73, 75, 76, 78, 80, 81, 83, 85, 86, 88, 90, 91, 93, 95, 96, 98, 100, 101, 103, 105, 106, 111, 113, 115, 122, 124}
Finding source file corresponding to a gcov data file
(many many lines after)
I have many more files to add, and I really do not want a specific filter for each unique path and filename. What I want is a single filter for all files that match the pattern boot_verify*.c. I have tried the different regexes shown below, but they all return no files selected.
--filter '(.+/)?boot_verify(.*)\.c$'
--filter '(.+/)?boot_verify_([a-z][A-Z][0-9]+)\.c$'
--filter '(.+/)?boot_verify_([a-z][A-Z][0-9]*)\.c$'
To force the issue, I tried a regex with just a single file name, shown below, and even that did not work.
--filter '(.+/)?boot_verify_crc32\.c$'
I would really like to get both the wildcard pattern and the specific filename pattern working. What am I doing wrong?
Ken
When regexes are used in command-line arguments, they must be properly escaped or quoted.
When using Bash or another POSIX shell on Linux or another Unix-like operating system, including macOS, surround the regex with single quotes. For example:
gcovr --filter '(.+/)?boot_verify_crc32\.c$'
See the Quoting section in the Bash Reference Manual or in the POSIX spec. In particular, escaping/quoting is necessary if the regex contains any of the following characters, or space characters:
| & ; < > ( ) $ ` \ " ' * ? [ # ~ = %
When using cmd.exe on Windows, surround the regex with double quotes:
gcovr --filter "(.+/)?boot_verify_crc32\.c$"
I cannot find a good reference for this, though Parsing C Command-Line Arguments seems to contain the relevant rules.
When using PowerShell on Windows, surround the regex with single quotes:
gcovr --filter '(.+/)?boot_verify_crc32\.c$'
See the sections about Quoting Rules and about Parsing in the PowerShell documentation. In particular, escaping/quoting is necessary if the regex contains any of the following characters, or space characters:
' " ` , ; ( ) { } | & < > #
Additionally, please note that gcovr 4.1 is from 2018. Since then, some fixes relating to Windows support and filter matching have been implemented. Assuming that you installed gcovr as pip3 install gcovr, you can upgrade to the latest stable release with:
pip3 install -U gcovr

Windows Command Line Group Policy - Using findstr/regex

Using the Windows command line, I am listing the Administrative Templates Group Policy settings with this command: gpresult /Scope User /v
An example of the output I am receiving is shown below:
Administrative Templates
------------------------
GPO: Local Group Policy
Folder Id: Software\Policies\Microsoft\Windows\Control Panel\Desktop\ScreenSaverIsSecure
Value: 49, 0, 0, 0
State: Enabled
GPO: Local Group Policy
Folder Id: Software\Policies\Microsoft\Windows\Control Panel\Desktop\ScreenSaveTimeOut
Value: 57, 0, 48, 0, 48, 0, 0, 0
State: Enabled
GPO: Local Group Policy
Folder Id: Software\Policies\Microsoft\Windows\Control Panel\Desktop\SCRNSAVE.EXE
Value: 115, 0, 99, 0, 114, 0, 110, 0, 115, 0, 97, 0, 118, 0, 101, 0, 46, 0, 115, 0, 99, 0, 114, 0, 0, 0
State: Enabled
GPO: Local Group Policy
Folder Id: Software\Policies\Microsoft\Windows\Control Panel\Desktop\ScreenSaveActive
Value: 49, 0, 0, 0
State: Enabled
I am trying to modify the command to use findstr (or a similar approach) to list a specific Administrative Template setting, but I have been unable to find the correct syntax. For example, from the list above, I want to get back only the results for this one setting and nothing else:
GPO: Local Group Policy
Folder Id: Software\Policies\Microsoft\Windows\Control Panel\Desktop\ScreenSaveTimeOut
Value: 57, 0, 48, 0, 48, 0, 0, 0
State: Enabled
Any help is much appreciated.

No module named recording while trying to record a Scrapy crawl

When I try to record a crawl using Frontera and Scrapy, I get an error saying "No module named recording". I do not understand why it comes up, since I have followed the steps on recording from the official documentation.
Any help is appreciated.
The traceback is:
2017-07-04 15:38:57 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: alexa)
2017-07-04 15:38:57 [scrapy.utils.log] INFO: Overridden settings: {'AUTOTHROTTLE_MAX_DELAY': 3.0, 'DOWNLOAD_MAXSIZE': 10485760, 'SPIDER_MODULES': ['alexa.spiders'], 'CONCURRENT_REQUESTS_PER_DOMAIN': 10, 'CONCURRENT_REQUESTS': 256, 'RANDOMIZE_DOWNLOAD_DELAY': False, 'RETRY_ENABLED': False, 'DUPEFILTER_CLASS': 'alexa.bloom_filter1.BLOOMDupeFilter', 'AUTOTHROTTLE_START_DELAY': 0.25, 'REACTOR_THREADPOOL_MAXSIZE': 20, 'BOT_NAME': 'alexa', 'AJAXCRAWL_ENABLED': True, 'COOKIES_ENABLED': False, 'SCHEDULER': 'frontera.contrib.scrapy.schedulers.frontier.FronteraScheduler', 'DOWNLOAD_TIMEOUT': 120, 'AUTOTHROTTLE_ENABLED': True, 'NEWSPIDER_MODULE': 'alexa.spiders'}
2017-07-04 15:38:57 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.throttle.AutoThrottle']
Unhandled error in Deferred:
2017-07-04 15:38:57 [twisted] CRITICAL: Unhandled error in Deferred:
2017-07-04 15:38:57 [twisted] CRITICAL:
Traceback (most recent call last):
File "/root/scrapy/scrapy/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1386, in _inlineCallbacks
result = g.send(result)
File "/root/scrapy/scrapy/local/lib/python2.7/site-packages/scrapy/crawler.py", line 95, in crawl
six.reraise(*exc_info)
File "/root/scrapy/scrapy/local/lib/python2.7/site-packages/scrapy/crawler.py", line 77, in crawl
self.engine = self._create_engine()
File "/root/scrapy/scrapy/local/lib/python2.7/site-packages/scrapy/crawler.py", line 102, in _create_engine
return ExecutionEngine(self, lambda _: self.stop())
File "/root/scrapy/scrapy/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 69, in __init__
self.downloader = downloader_cls(crawler)
File "/root/scrapy/scrapy/local/lib/python2.7/site-packages/scrapy/core/downloader/__init__.py", line 88, in __init__
self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
File "/root/scrapy/scrapy/local/lib/python2.7/site-packages/scrapy/middleware.py", line 58, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "/root/scrapy/scrapy/local/lib/python2.7/site-packages/scrapy/middleware.py", line 34, in from_settings
mwcls = load_object(clspath)
File "/root/scrapy/scrapy/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 44, in load_object
mod = import_module(module)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named recording
I had the same problem following the official documentation. I found a solution by following a Scrapinghub blog post.
The problem is that the official documentation is outdated. It uses middlewares which do not exist anymore:
SPIDER_MIDDLEWARES.update({
    'frontera.contrib.scrapy.middlewares.recording.CrawlRecorderSpiderMiddleware': 1000,
})
DOWNLOADER_MIDDLEWARES.update({
    'frontera.contrib.scrapy.middlewares.recording.CrawlRecorderDownloaderMiddleware': 1000,
})
Instead of the recording middlewares, you need to use the scheduler ones:
SPIDER_MIDDLEWARES.update({
    'frontera.contrib.scrapy.middlewares.schedulers.SchedulerSpiderMiddleware': 1000,
})
DOWNLOADER_MIDDLEWARES.update({
    'frontera.contrib.scrapy.middlewares.schedulers.SchedulerDownloaderMiddleware': 1000,
})
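For reference, here is a minimal settings.py sketch that puts these scheduler middlewares next to the Frontera scheduler already visible in the Scrapy log above. The FRONTERA_SETTINGS module path is a hypothetical placeholder; adjust priorities and merge with any middlewares you already have.
# settings.py -- minimal sketch; middleware and scheduler paths are taken
# from the answer above and from the Scrapy log output.
SPIDER_MIDDLEWARES = {
    'frontera.contrib.scrapy.middlewares.schedulers.SchedulerSpiderMiddleware': 1000,
}
DOWNLOADER_MIDDLEWARES = {
    'frontera.contrib.scrapy.middlewares.schedulers.SchedulerDownloaderMiddleware': 1000,
}
# Hand scheduling over to Frontera (this value already appears in the log).
SCHEDULER = 'frontera.contrib.scrapy.schedulers.frontier.FronteraScheduler'
# Point Frontera at its own settings module (hypothetical module name).
FRONTERA_SETTINGS = 'alexa.frontera_settings'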

How do I get request and response counts in scrapyd?

I am trying to get the request and response counts in scrapyd while running multiple spiders (8 spiders, dynamically). I am trying to read those counts using Python.
The counts in question look like this:
{'downloader/request_bytes': 130427,
'downloader/request_count': 273,
'downloader/request_method_count/GET': 273,
'downloader/response_bytes': 2169984,
'downloader/response_count': 273,
'downloader/response_status_count/200': 271,
'downloader/response_status_count/404': 2,
'dupefilter/filtered': 416,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 7, 21, 14, 21, 38, 13000),
'item_scraped_count': 119,
'log_count/DEBUG': 406,
'log_count/INFO': 44,
'offsite/domains': 9,
'offsite/filtered': 5865,
'request_depth_max': 14,
'response_received_count': 273,
'scheduler/dequeued': 273,
'scheduler/dequeued/memory': 273,
'scheduler/enqueued': 273,
'scheduler/enqueued/memory': 273,
'start_time': datetime.datetime(2015, 7, 21, 14, 16, 41, 144000)}
Thanks,
Use the Stats Collection from Scrapy.
With this you can access the statistics that are dumped to the console at the end, and if you write your own middleware you can combine the results of your 8 spiders, as in the example in the documentation.
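As a minimal sketch of that idea (using a small Scrapy extension rather than a middleware, and assuming a reasonably recent Scrapy), the keys read below are the same 'downloader/request_count' and 'downloader/response_count' entries shown in the dump above; the class name and the settings module path are hypothetical:
from scrapy import signals

class RequestResponseCountExtension(object):
    """Log the downloader request/response counters when a spider finishes."""

    def __init__(self, stats):
        self.stats = stats

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls(crawler.stats)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_closed(self, spider):
        # Same keys as in the stats dump above.
        requests = self.stats.get_value('downloader/request_count', 0)
        responses = self.stats.get_value('downloader/response_count', 0)
        spider.logger.info('%s: %d requests, %d responses',
                           spider.name, requests, responses)
Enable it in settings.py (the module path is a placeholder):
EXTENSIONS = {
    'myproject.extensions.RequestResponseCountExtension': 500,
}
Note that scrapyd runs each spider in its own process, so to combine the counts of all 8 spiders you still need to write each spider's values to a shared place (a file, a database, or the scrapyd logs) and aggregate them there.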