Spring XD Dynamic Module ClassLoader Issue

According to the documentation, we can load jars dynamically at module creation time by using the module.classloader attribute in the module's .properties file:
http://docs.spring.io/spring-xd/docs/1.3.1.RELEASE/reference/html/#module-class-loading
I spent two days trying to test this feature. It does not work: the module.classloader option seems to be simply ignored.
I did not find any string named module.classloader in the XD code, but I did find another one called module.classpath in this class:
https://github.com/spring-projects/spring-xd/blob/master/spring-xd-module/src/main/java/org/springframework/xd/module/options/ModuleUtils.java
The code in the above class seems to match the documentation. But unfortunately it does not work either: my classes are not found and I get java.lang.ClassNotFoundException.
I have a module option named dir4jars where I put the jars to load at creation time (when I issue job create --name xx --definition ..). It's a directory, and I have tested the following possibilities, with both module.classpath and module.classloader:
module.classpath=${dir4jars}/*.jar
module.classloader=${dir4jars}/*.jar
job create --name jobName --definition "myJobModuleName --dir4jars=C:/ELS/Flash/libxd" --deploy
and
job create --name jobName --definition "myJobModuleName --dir4jars=file:C:/ELS/Flash/libxd" --deploy
I need the dir4jars to be absolute and outside XD home.
So my questions:
What's the right option to use for this dynamic load: module.classpath or module.classloader?
How can I set an absolute directory as I mentioned above?
Many thanks.

I think it has to be module.classpath, and module.classloader looks like a mistake in the documentation. Does this work when you explicitly use module.classpath=file:C:/ELS/Flash/libxd?
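Putting the pieces together, a sketch of what the module's .properties file and the shell command might then look like (this assumes module.classpath is the honored option and that it accepts file: URLs with wildcards; both are assumptions, not verified against the XD source):
module.classpath=file:${dir4jars}/*.jar
job create --name jobName --definition "myJobModuleName --dir4jars=C:/ELS/Flash/libxd" --deploy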
As a side note: please consider using Spring Cloud Data Flow, which is the successor of Spring XD.

Where do I tell AWS SAM which file to choose depending on the stage/environment?

In app.js, I want to require a different "config" file depending on the stage/account.
For example:
dev account: const config = require("config-dev.json")
prod account: const config = require("config-prod.json")
At first I tried passing it using build --container-env-var-file, but after getting undefined when using process.env.myVar, I think that env file is used at the build stage and has nothing to do with my function; I could, however, use it in the template creation stage.
So I'm looking now at deploy, and there are a few different things that seem relevant, but it's quite confusing to choose which one fits my use case.
There is the config file, but I have no idea how to configure it, since I'm in a pipeline context: where would I instruct my process to use the correct JSON?
There are also parameters and mappings.
My JSON is not just a few vars; it's a fairly complex object. Nothing crazy, but not simple enough to pass the vars one by one.
So I thought a single one containing the filename that I want to use could do the job.
But I have no idea how to tell which stage of deployment I am currently in, or how to pass that value so I can access it from the Lambda function.
I also faced this issue while executing an AWS Lambda function locally. It was solved by configuring the file using the sam build command.
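For the stage-selection part of the question, a minimal sketch of one common approach: set an environment variable on the function (for example under Environment/Variables in template.yaml, which SAM supports) and branch on it in code. The variable name STAGE and the file names here are hypothetical:
const stage = process.env.STAGE || "dev"; // set per stage/account at deploy time
const config = require(`./config-${stage}.json`); // resolves to config-dev.json or config-prod.json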

Ember test launch in CI/dev environment

I have this in my testem.js
launch_in_ci: ['Chromium'],
launch_in_dev: ['Chrome'],
Is there any way to run ember test and specify CI/dev environment?
I know I can use this solution, but that doesn't look like the right way, since I already have a configuration file.
Here is how testem chooses which config to use:
https://github.com/testem/testem/blob/50ca9c274ec904d77a90915840349142231aadff/lib/config.js#L294 (I just searched for launch_in_dev on their GitHub.)
appMode is passed in to the constructor here: https://github.com/testem/testem/blob/50ca9c274ec904d77a90915840349142231aadff/lib/config.js#L44
that Config class is required here: https://github.com/testem/testem/blob/50ca9c274ec904d77a90915840349142231aadff/lib/api.js#L4
and constructed here: https://github.com/testem/testem/blob/50ca9c274ec904d77a90915840349142231aadff/lib/api.js#L53
So we need to find out how options is set and when setup is called.
options in Api is set here: https://github.com/testem/testem/blob/50ca9c274ec904d77a90915840349142231aadff/lib/api.js#L74-L86 (startDev and startCi -- these seem fairly specific -- hopefully we're close to finding the answer.)
those methods are both called here: https://github.com/testem/testem/blob/50ca9c274ec904d77a90915840349142231aadff/testem.js#L79-L81
and now what we're looking for is how progOptions gets built.
It comes from here: https://github.com/testem/testem/blob/50ca9c274ec904d77a90915840349142231aadff/testem.js#L5
which is from https://www.npmjs.com/package/commander
which means we need to read through all of the CLI configuration below the point where commander is required.
The app mode is set in the evaluation of each of these command definitions: https://github.com/testem/testem/blob/50ca9c274ec904d77a90915840349142231aadff/testem.js#L8-L52
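For comparison, on the plain testem CLI the switch is explicit, which supports the reading below:
testem       # interactive dev mode, uses launch_in_dev
testem ci    # single run that exits, uses launch_in_ci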
Which may mean that this is normally not an automated switch, and ember is abstracting it for us.
So let's hop on over to ember-cli:
a search brought me to this file: https://github.com/ember-cli/ember-cli/blob/b24b73b388934796ca915ca665b48a27c857199b/lib/tasks/test.js#L13
but here it looks like ember always runs testem in CI mode.
So... I don't know. I'll leave this here for others to go spelunking, but I've got to do some chores now.
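One possible workaround, given that ember test always drives testem in CI mode: pick the browser from an environment variable inside testem.js, since the config file is ordinary JavaScript evaluated by node. A sketch; the variable name is made up, and this is an assumption rather than a documented ember-cli feature:
launch_in_ci: [process.env.TESTEM_CI_BROWSER || 'Chromium'],
launch_in_dev: ['Chrome'],
Then CI leaves the variable unset and gets Chromium, while a local run can override it with TESTEM_CI_BROWSER=Chrome ember test.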

Explain how `<<: *name` makes a reference to `&name` in docker-compose?

I'm trying to understand how this docker-compose file was created, as I want to replicate it in a Kubernetes deployment YAML file.
In reference to cookiecutter-django's docker-compose production.yaml file:
...
services:
  django: &django
    ...
By docker-compose design, the name of the service here is already defined as django, but then I noticed this extra bit, &django. This made me wonder why it's here. Further down, I noticed the following:
...
  celeryworker:
    <<: *django
...
I don't understand how that works. The docker-compose docs have no reference or mention of using <<, let alone making a reference to a named service like *django.
Can anyone explain how the above works, and how I can replicate it in a Kubernetes deployment or service YAML file (or both), if possible?
Edit:
The question that @jonsharpe shared was similar, but the answer wasn't clear to me on how it's used.
There are three different things happening here, and none of them are specifically compose syntax; rather, they are all YAML syntax.
First is defining an anchor, with & followed by a name. That's similar to defining a variable to use later in the YAML, with its value matching the value of the YAML object where it appears.
Next is the alias, specified with * and the same name as the anchor. That uses the anchor in a second location in the YAML file.
Last is a mapping merge using the << syntax, which merges all of the mapped values in the alias with the rest of the values in the current map, allowing you to override values in the saved anchor with values specific to that section of the compose file.
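A stripped-down sketch of how the three pieces fit together (simplified from the compose file above; the image name and command are made up for illustration):
services:
  django: &django              # & saves this whole mapping under the anchor "django"
    image: example/django
    restart: always
  celeryworker:
    <<: *django                # * is the alias; << merges the anchored keys in here
    command: celery worker     # keys listed here override or extend the merged ones
After parsing, celeryworker ends up with image: example/django, restart: always, and its own command, exactly as if the django block had been copied and edited. Kubernetes manifests are plain YAML too, so anchors and aliases work there as well, with two caveats: the anchor and alias must live in the same YAML document (they do not cross the --- separator), and whether the << merge key is honored depends on the parser, so it's worth testing with kubectl before relying on it.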
To dig more into this, try searching on "yaml anchors and aliases". The first hit for me is this blog post: https://medium.com/@kinghuang/docker-compose-anchors-aliases-extensions-a1e4105d70bd

From local scrapy to scrapy cloud (scraping hub) - Unexpected results

The scraper I deployed on Scrapy Cloud is producing unexpected results compared to the local version.
My local version can easily extract every field of a product item (from an online retailer), but on Scrapy Cloud the "ingredients" field and the "list of prices" field always come back empty.
You'll see in the attached picture the two elements that always come back empty, whereas they are extracted perfectly by my local version.
I'm using Python 3, and the stack was configured with scrapy:1.3-py3.
At first I thought it was an issue with the regex and unicode, but it seems not.
So I tried everything (ur, RE.ENCODE, ...) and nothing worked.
For the ingredients part, my code is the following :
import re  # needed for the re.search calls below

data_box = response.xpath('//*[@id="ingredients"]').css('div.information__tab__content *::text').extract()
data_inter = ''.join(data_box).strip()
match1 = re.search(r'([Ii]ngr[ée]dients\s*\:{0,1})\s*(.*)\.*', data_inter)
match2 = re.search(r'([Cc]omposition\s*\:{0,1})\s*(.*)\.*', data_inter)
if match1:
    result_matching_ingredients = match1.group(1, 2)[1].replace('"', '').replace(".", "").replace(";", ",").strip()
elif match2:
    result_matching_ingredients = match2.group(1, 2)[1].replace('"', '').replace(".", "").replace(";", ",").strip()
else:
    result_matching_ingredients = ''
ingredients = result_matching_ingredients
It seems that the matching never occurs on scrapy cloud.
For prices, my code is the following :
list_prices = []
for package in list_packaging:
    tonnage = package.css('div.product__varianttitle::text').extract_first().strip()
    prix_inter = ''.join(package.css('span.product__smallprice__text').re(r'\(\s*\d+\,\d*\s*€\s*\/\s*kg\)'))
    prix = prix_inter.replace("(", "").replace(")", "").replace("/", "").replace("€", "").replace("kg", "").replace(",", ".").strip()
    list_prices.append(prix)
That's the same story: still empty.
I repeat: it's working fine on my local version.
Those two fields are the only ones causing an issue; I'm extracting a bunch of other data (with regex too) on Scrapy Cloud and I'm very satisfied with it.
Any ideas?
I work really often with ScrapingHub, and usually the way I debug is:
Check the job requests (through the ScrapingHub interface),
in order to check whether there is a redirect that makes the page slightly different, like a query string ?lang=en.
Check the job logs (through the ScrapingHub interface).
You can either print or use a logger to check everything you want throughout your parser. So if you really want to be sure the scraper sees the same page on your local machine and on ScrapingHub, you can print(response.body) and compare, to find what might cause the difference.
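A minimal sketch of what that looks like inside a spider callback (the log level and the 1000-character cutoff are arbitrary choices):
def parse(self, response):
    # log what the spider actually received, so a ScrapingHub run can be
    # diffed against a local run of the same page
    self.logger.info("url=%s status=%s", response.url, response.status)
    self.logger.info(response.text[:1000])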
If you cannot find it, I'll try to deploy a little spider on ScrapingHub and edit this post if I can manage to have some time left today!
Check that ScrapingHub's logs are displaying the expected version of Python, even if the stack is correctly set up in the project's yml file.

Need python interface for moving a machine to another folder

I am trying to find code support in Python for moving a machine between a datacenter's folders, without success. I saw in pysphere that you can define the folder only at the clone stage, not after the machine has already been cloned.
This seems like a solution to my problem, but it is in PowerShell. Does anybody know of wrapping support for it in Python?
You can do this with pyVmomi. I would avoid pysphere, because pyVmomi is maintained by VMware and pysphere hasn't been updated in 4 years or more.
That said, here is some sample code that uses pyVmomi:
from pyVim import connect  # pyVim ships as part of the pyvmomi package

service_instance = connect.SmartConnect(host=args.host,
                                        user=args.user,
                                        pwd=args.password,
                                        port=int(args.port))
search_index = service_instance.content.searchIndex
folder = search_index.FindByInventoryPath("LivingRoom/vm/new_folder")
vm_to_move = search_index.FindByInventoryPath("LivingRoom/vm/test-vm")
move_task = folder.MoveInto([vm_to_move])
In this example I create a ServiceInstance by connecting to a vCenter; next I grab an instance of the SearchIndex. The SearchIndex has several methods that can be used to locate your managed objects. In this example I decided to use the FindByInventoryPath method, but you could use any that works for you. First I find the instance of the Folder named new_folder that I want to move my VirtualMachine into. Next I find the VirtualMachine I want to move. Finally I execute the Task that will move the vm for me. That task takes as a param the list of objects to be moved into the folder, and in this case it's a single-item list containing only the one vm I want to move. From here you can monitor the task if you want.
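If you do want to block until the move completes, pyvmomi bundles a small helper in its pyVim package; a short sketch:
from pyVim.task import WaitForTask
# waits for move_task to finish and raises if it ends in an error state
WaitForTask(move_task)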
Keep in mind that if you use FindByInventoryPath, there are many hidden folders that are not visible from the GUI. I find that using the ManagedObjectBrowser is very helpful at times.
Helpful doc links:
https://github.com/vmware/pyvmomi/blob/master/docs/vim/SearchIndex.rst
https://github.com/vmware/pyvmomi/blob/master/docs/vim/Folder.rst
https://github.com/vmware/pyvmomi/blob/master/docs/vim/Task.rst