MassTransit.RabbitMqTransport.RabbitMqAddressException: 'The invalid scheme was specified: amqps'

Trying to configure a .NET Core service to talk to an AmazonMQ RabbitMQ instance. This is the config I'm using, based on:
https://masstransit-project.com/usage/transports/rabbitmq.html#amazonmq-rabbitmq
cfg.Host(new Uri("amqps://b-mdefg-ff37-4e33-855c-577a0c749659.mq.us-westeast-2.amazonaws.com:5432/"), h =>
{
    h.Username("MyUsername");
    h.Password("XXXXXXXXX");
});
NuGet packages are:
MassTransit.AspNetCore v5.5.6
MassTransit.Autofac v5.3.2
MassTransit.RabbitMQ v5.3.2
MassTransit.SerilogIntegration v5.1.5
The exception being thrown is:
MassTransit.RabbitMqTransport.RabbitMqAddressException: 'The invalid scheme was specified: amqps'
Upgrading to the latest MassTransit caused a lot of unrelated issues. Any clues as to a workaround?

OK, the solution was fairly straightforward, but Google gave me nothing, so perhaps this will save someone some time. I simply upgraded MassTransit.RabbitMQ to v5.5.6 (the same version as MassTransit.AspNetCore) and it all works now.
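In other words, MassTransit.RabbitMQ 5.3.2 rejects the amqps scheme, and the transport package needs to match the core package versions. As a sketch, the aligned references might look like this in the .csproj (the versions shown for the Autofac and Serilog packages are assumptions; check NuGet for their matching releases):

<ItemGroup>
  <!-- keep all MassTransit packages on the same release train -->
  <PackageReference Include="MassTransit.AspNetCore" Version="5.5.6" />
  <PackageReference Include="MassTransit.RabbitMQ" Version="5.5.6" />
  <PackageReference Include="MassTransit.Autofac" Version="5.5.6" />
  <PackageReference Include="MassTransit.SerilogIntegration" Version="5.5.6" />
</ItemGroup>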


Error when running Pytest with Delta Lake Tables

I am working in a company VDI, and for security reasons they run their own Artifactory.
Currently I am writing unit tests for a function that deletes entries from a Delta table. When I started, I received an unresolved-dependencies error because my Spark session was configured to load jars from Maven. I was able to solve this issue by loading the jars locally from /opt/spark/jars. Now my code looks like this:
import os
import unittest

from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

class TestTransformation(unittest.TestCase):
    def test_ksu_deletion(self):
        self.spark = SparkSession.builder\
            .appName('SPARK_DELETION')\
            .config("spark.delta.logStore.class", "org.apache.spark.sql.delta.storage.S3SingleDriverLogStore")\
            .config("spark.jars", "/opt/spark/jars/delta-core_2.12-0.7.0.jar,/opt/spark/jars/hadoop-aws-3.2.0.jar")\
            .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")\
            .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")\
            .getOrCreate()
        os.environ["KSU_DELETION_OBJECT"] = "UNITTEST/"
        deltatable = DeltaTable.forPath(self.spark, "/projects/some/path/snappy.parquet")
        deltatable.delete(col("DATE") < get_current())  # get_current() is the project's own helper
However, I am getting the error message:
E py4j.protocol.Py4JJavaError: An error occurred while calling z:io.delta.tables.DeltaTable.forPath.
E : java.lang.NoSuchMethodError: org.apache.spark.sql.AnalysisException.<init>(Ljava/lang/String;Lscala/Option;Lscala/Option;Lscala/Option;Lscala/Option;)V
Do you have any idea what is causing this? I am assuming it has to do with the way I am configuring spark.sql.extensions and/or spark.sql.catalog.spark_catalog, but to be honest, I am quite a newb in Spark.
I would greatly appreciate any hint.
Thanks a lot in advance!
Edit:
We are using Spark 3.0.2 (Scala 2.12.10). According to https://docs.delta.io/latest/releases.html, this should be compatible. Apart from the SparkSession, I trimmed down the subsequent code to
df = spark.read.parquet("/path/to/file.snappy.parquet")
and now I am getting the error message
java.lang.IncompatibleClassChangeError: class org.apache.spark.sql.catalyst.plans.logical.DeltaDelete has interface org.apache.spark.sql.catalyst.plans.logical.UnaryNode as super class
As I said, I am quite new to (Py)Spark, so please don't hesitate to mention things you consider completely obvious.
Edit 2: I checked the Python path I am exporting in the shell before running the code and I can see the following (screenshot not reproduced here).
Could this cause any problem? I don't understand why I do not get this error when running the code within pipenv (with spark-submit).
It looks like you're using an incompatible version of the Delta Lake library: 0.7.0 was built for Spark 3.0, but you're running another version, either lower or higher. Consult the Delta releases page to find the mapping between Delta versions and required Spark versions.
If you're using Spark 3.1 or 3.2, consider using the delta-spark Python package, which installs all the necessary dependencies so you can just import the DeltaTable class.
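For illustration, the delta-spark route looks roughly like this (adapted from the Delta Lake quickstart; the app name is a placeholder):

import pyspark
from delta import configure_spark_with_delta_pip

# configure_spark_with_delta_pip pulls in the delta-core jar matching the installed delta-spark version
builder = pyspark.sql.SparkSession.builder.appName("delta-tests") \
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
spark = configure_spark_with_delta_pip(builder).getOrCreate()

from delta.tables import DeltaTable  # now imports without manual jar wrangling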
Update: yes, this happens because of the conflicting versions - you need to remove the delta-spark and pyspark Python packages and install pyspark==3.0.2 explicitly.
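Roughly, assuming a pip-based environment:

pip uninstall delta-spark pyspark
pip install pyspark==3.0.2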
P.S. Also look into the pytest-spark package, which can simplify the Spark session configuration for all tests. You can find examples of it used together with Delta here.
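As a rough sketch of what that looks like in pytest.ini (ini syntax as I recall it from the plugin's README, so double-check it; the options are the ones from the question):

[pytest]
spark_home = /opt/spark
spark_options =
    spark.sql.extensions: io.delta.sql.DeltaSparkSessionExtension
    spark.sql.catalog.spark_catalog: org.apache.spark.sql.delta.catalog.DeltaCatalog

Tests then receive the plugin's ready-made spark_session fixture instead of building the SparkSession by hand.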

How to get Instance name in CommandBox CF 2018?

I recently started using CommandBox to run ColdFusion in my local environment. After I played around for a while, one issue I ran into was related to the adminapi. Here is the code that I use in one of my projects:
adminObj = createObject("component","cfide.adminapi.runtime");
instance = adminObj.getInstanceName();
This code is pretty straightforward and works just fine if I install the traditional ColdFusion Developer edition on my machine. I tried running it on CommandBox with this engine setting in server.json:
"app": { "cfengine": "adobe@2018.0.7" }
After I ran the code above, this is the error message I got:
Object Instantiation Exception.
Class not found: com.adobe.coldfusion.entman.ProcessServer
The first debugging step was to check whether the component exists. I simply checked that like this:
adminObj = createObject("component","cfide.adminapi.runtime");
writeDump(adminObj);
The result I got on the screen was this:
component CFIDE.adminapi.runtime
extends CFIDE.adminapi.base
METHODS
Then I tried this to make sure the method exists in the scope:
adminObj = createObject("component","cfide.adminapi.runtime");
writeDump(adminObj.getInstanceName);
The output looked like this, confirming that the method getInstanceName exists:
function getInstanceName
Arguments: none
ReturnType: any
Roles:
Access: public
Output: false
DisplayName:
Hint: returns the current instance name
Description:
The error occurs only when I actually call getInstanceName(). Does anyone know what could be the reason for this error? Is there any solution to this particular problem? As I already mentioned, this method works in a traditional ColdFusion 2018 developer environment. Thank you.
This is a bug in Adobe ColdFusion. The CFC you're creating is trying to create an instance of a specific Java class. I recognize the class name com.adobe.coldfusion.entman.ProcessServer as being related to their enterprise manager, which controls features only available in certain versions of CF, as well as features only available on their "standard" Tomcat installation (as opposed to a J2E deployment like CommandBox).
Please report this to Adobe in the Adobe bug tracker, as they appear to be incorrectly detecting the servlet installation. I worked with them a couple of years ago to improve their servlet detection on CommandBox, but I guess they still have some issues.
As a workaround, you could try to find out which jar that class comes from on a non-CommandBox installation of Adobe ColdFusion and add it to the path, but I can't promise that it will work or that it won't have negative consequences.
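A different stopgap, not from the answer above but a defensive sketch: trap the failure and fall back to a default instance name, since a CommandBox server typically runs a single instance anyway (the default Adobe instance name is usually "cfusion"):

<cfscript>
// fallback for environments where the enterprise manager classes are missing
instanceName = "cfusion";
try {
    adminObj = createObject("component", "cfide.adminapi.runtime");
    instanceName = adminObj.getInstanceName();
} catch (any e) {
    // CommandBox / J2E deployments throw "Class not found: com.adobe.coldfusion.entman.ProcessServer"
}
</cfscript>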

DryIoc with MediatR: IAsyncRequestHandler Resolve exception

DryIoc can't seem to resolve an IAsyncRequestHandler. It throws:
An exception of type 'DryIoc.ContainerException' occurred in DryIoc.dll but was not handled in user code. Additional information: Unable to resolve MediatR.IRequestHandler.
Where no service registrations found
and number of Rules.FallbackContainers: 0
and number of Rules.UnknownServiceResolvers: 0
which is weird, because it should be resolving an IAsyncRequestHandler. Another weird thing is that the code runs well on .NET Fiddle (check here).
I'm using VS 2015 Update 3 on Windows 10 Home, MediatR 3.0.0, DryIoc.dll 2.10.0, and .NET Framework 4.5 (also tried 4.6.1). Am I registering it in the wrong way? This should be straightforward. Related: What is the best way to register all handlers?
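The thread doesn't include the registration code (it lives in the linked fiddle), but for reference, a typical MediatR 3.0 wiring on DryIoc 2.x looks roughly like the sketch below; Ping and PingHandler are placeholder types, not from the original post:

using System.Threading.Tasks;
using DryIoc;
using MediatR;

public class Ping : IRequest<string> { }

// an async handler; this is the interface the container must be able to resolve
public class PingHandler : IAsyncRequestHandler<Ping, string>
{
    public Task<string> Handle(Ping message) => Task.FromResult("Pong");
}

public static class MediatorWiring
{
    public static async Task<string> Demo()
    {
        var container = new Container();

        // MediatR 3.x resolves handlers through these two factory delegates
        container.RegisterDelegate<SingleInstanceFactory>(r => serviceType => r.Resolve(serviceType));
        container.RegisterDelegate<MultiInstanceFactory>(r => serviceType => r.ResolveMany(serviceType));

        container.Register<IMediator, Mediator>(Reuse.Singleton);

        // register every type in the assembly against its implemented interfaces,
        // which covers IAsyncRequestHandler<TRequest, TResponse> as well
        container.RegisterMany(new[] { typeof(PingHandler).Assembly },
            serviceTypeCondition: type => type.IsInterface);

        var mediator = container.Resolve<IMediator>();
        return await mediator.Send(new Ping()); // "Pong"
    }
}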

addCookie method throws 'addCookie called with non-cookie parameter'

I am struggling with this error message, which has no direct forum discussion anywhere. Based on some of the things I saw around the web, I tried:
Changing localhost to 127.0.0.1
Playing around with browser.driver.manage() vs. browser.manage()
Cleaning out/updating my node modules
The same code runs on other machines with the same configuration (Win 10, chromedriver 2, etc.).
The code essentially gets the cookie value through API calls beforehand and uses it as such:
browser.get(URL);
browser.manage().addCookie('cookie_name', value);
Any help would be appreciated!
I assume that you are on Protractor 5.0.0. Adding cookies has changed in selenium-webdriver 3 and was noted as a breaking change in the Protractor changelog:
Before:
browser.manage().addCookie('testcookie', 'Jane-1234');
After:
browser.manage().addCookie({name:'testcookie', value: 'Jane-1234'});
The answer above did not work for me because I kept getting this error:
"Expected 2-6 arguments but got 1"
This is what I had to do to make it compile at least:
(browser.manage() as any).addCookie({name:'cookieName', value: 'cookieVal'});
Here is the thread where I got this info:
https://github.com/angular/protractor/issues/4148
It is still an open issue.
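For completeness, a small usage sketch combining both answers (the cookie name and value are placeholders, and getCookie is the standard WebDriver read-back):

await browser.get(URL);
await (browser.manage() as any).addCookie({ name: 'cookie_name', value: cookieValue });

// read the cookie back to confirm the browser accepted it
const cookie = await browser.manage().getCookie('cookie_name');
expect(cookie.value).toBe(cookieValue);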

Redmine_RE plugin is not saving the data

I am using Redmine 3.1.0 and have installed the redmine_re plugin. But when I try to save a requirement using the redmine_re plugin, I get the following error:
NameError (undefined local variable or method `connection' for #<ReArtifactRelationship:0x800ddb0>):
lib/plugins/acts_as_list/lib/active_record/acts/list.rb:220:in `bottom_item'
lib/plugins/acts_as_list/lib/active_record/acts/list.rb:214:in `bottom_position_in_list'
lib/plugins/acts_as_list/lib/active_record/acts/list.rb:205:in `add_to_list_bottom'
lib/redmine/sudo_mode.rb:63:in `sudo_mode'
Please suggest how to resolve this error.
I did not develop this plugin, but I think the support for Redmine 3.1.0 is only partial at the moment (and you may get other errors even after fixing this one).
I believe you are getting this error because of this Rails change: Deprecate #connection in favour of accessing it via the class.
Your error is related to this file, in this method:
def scope_condition()
  "#{connection.quote_column_name("source_id")} = #{quote_value(self.source_id)}
  AND
  #{connection.quote_column_name("relation_type")} = #{quote_value(self.relation_type)}"
end
Try adding self.class. in front of connection, as in the sketch below.
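Applied to scope_condition, the patched method would look like this:

def scope_condition()
  # access the DB connection via the class, per the Rails deprecation above
  "#{self.class.connection.quote_column_name("source_id")} = #{quote_value(self.source_id)}
  AND
  #{self.class.connection.quote_column_name("relation_type")} = #{quote_value(self.relation_type)}"
end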
You may have to repeat this for other files in the plugin's code.
If your changes work, I would suggest you submit a pull request on the plugin's GitHub page :)