can't get session singleton in EcomDev PHPUnit test - unit-testing

After some serious debugging I have found that I can't get a session object in Magento when running a test with the EcomDev_PHPUnit module.
Any singleton/model call, e.g. Mage::getSingleton('admin/session') or Mage::getModel('customer/session'), eventually throws an exception from EcomDev_PHPUnit_Controller_Request_Http::getHttpHost() saying "Cannot run controller test, because the host is not set for base url.", which happens because the $_SERVER['HTTP_HOST'] index is not set.
Is there something in the configuration that I might be missing that would cause this?

This is a problem related to Magento session initialization, which is an internal core part of Magento. The way to get rid of this error is to use a mock object that does not go through the standard Magento session initialization process, since that process starts a core PHP session.
You can replace the session object with a mock by using the following code, provided your test case extends one of the EcomDev_PHPUnit_Test_Case classes.
$sessionMock = $this->getModelMockBuilder('admin/session')
->disableOriginalConstructor() // This one removes session_start and other methods usage
->setMethods(null) // Enables original methods usage, because by default it overrides all methods
->getMock();
$this->replaceByMock('singleton', 'admin/session', $sessionMock);

You can set the host in phpunit.xml:
<phpunit ....>
...
<php>
<!-- HTTP_HOST is the bare host name, without the scheme -->
<server name='HTTP_HOST' value='local.mysite.com' />
</php>
</phpunit>

Related

How to bring SOAP capability to payara/micro in EJB project

I have an EJB project providing web services (both SOAP and REST) running inside a container with payara/micro as the base image. Since payara/micro does not come with the JAX-WS (SOAP support) feature out of the box, I added
cxf-rt-frontend-jaxws
and
cxf-rt-transports-http
as dependencies to the project, followed this tutorial, and put in the following code:
@Override
public void loadBus(ServletConfig servletConfig) {
    super.loadBus(servletConfig);
    Bus bus = getBus();
    BusFactory.setDefaultBus(bus);
    Endpoint.publish("/MySoapService", new ASoapService());
}
With this I was able to make the SOAP interface almost available (the WSDL information is already publicly accessible), and
http://localhost:8080/<my project name>/services
even listed the available services as well as their methods, endpoints, WSDL and target namespace information.
But when I tried to access the SOAP service via a SOAP client, I got errors on the server side containing the following lines:
...
Caused by: java.lang.NullPointerException: null
at com.example.ASoapService.getXxx
...
where
ASoapService
is actually an EJB. So I instead tried to replace the above code with the following:
@EJB
ASoapService aSoapService;
...
Endpoint.publish("/MySoapService", aSoapService);
During startup of the container, I got:
Caused by: javax.naming.NameNotFoundException: com.example.ASoapServiceF#com.example.ASoapService not found
By checking the logs, I found a possible reason:
When the SOAP part starts up with the following code
Endpoint.publish("/MySoapService", aSoapService);
the EJB container is not yet ready, and thus the lookup of
ASoapService
fails. Is that assumption correct? Because normally you should see something like:
[2018-02-02T14:43:57.821+0000] [] [INFO] [AS-EJB-00054] [javax.enterprise.ejb.container] [tid: _ThreadID=1 _ThreadName=main] [timeMillis: 1517582637821] [levelValue: 800] Portable JNDI names for EJB ASoapService: [java:global/<my project name>/ASoapService, java:global/<my project name>/ASoapService!com.example.ASoapService]
during startup, which is not the case in my situation.
I am relatively new to the EJB and GlassFish world. Can I somehow force the EJB container to start first? Does this actually have anything to do with the startup sequence? How do I combine the two?
Thanks in advance.
You shouldn't be trying to force the EJB container to start. Instead, try one of the following:
instead of @EJB ASoapService aSoapService to inject the EJB, try @Inject ASoapService aSoapService - @Inject should wait for the dependencies and therefore wait until the EJB is available
run the Endpoint.publish method from an object which is initialized after the EJB container is ready, either from a startup singleton EJB (see the sketch below) or when the CDI application scope is initialized: https://rmannibucau.wordpress.com/2015/03/10/cdi-and-startup/
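A minimal sketch of the startup-singleton option (the class name is an assumption, and it presumes the CXF servlet/bus from the question is already set up): the @PostConstruct method runs only after the EJB container has initialized the bean, so the injected EJB is available when the endpoint is published.
import javax.annotation.PostConstruct;
import javax.ejb.EJB;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.xml.ws.Endpoint;

@Singleton
@Startup
public class SoapEndpointPublisher {

    // Resolved by the EJB container before @PostConstruct is called
    @EJB
    private ASoapService aSoapService;

    @PostConstruct
    void publish() {
        // Same relative address as in the question; assumes the CXF bus is already the default bus
        Endpoint.publish("/MySoapService", aSoapService);
    }
}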

how to mock a service so as to inject it properly

I'm trying to mock a service within a functional test, which is used by another service:
$client = static::createClient();
$stub = $this->createMock(MailService::class);
$stub->method('sendMailToUser')->willReturn(9);
$client->getContainer()->set('belka.auth_bundle.mail_service', $stub);
// the *real* test should start here
If I put a die command inside the original sendMailToUser, the code stops running there, even though I tried to mock it to return 9. What's wrong with it? The service I'm testing has the following declaration, so I assumed the injected service was the one written above:
belka.auth_bundle.user_handler:
    class: Belka\AuthBundle\Handler\UserHandler
    arguments:
        - '@belka.auth_bundle.user_repository'
        - '@belka.auth_bundle.mail_service'
    calls:
        - [setRequest, ["@request_stack"]]
        - [setSettings, ["@belka.auth_bundle.setting_handler"]]
        - [setBodyJsonHandler, ["@belka.container_support_bundle.body_json_handler"]]
        - [setQuantityHandler, ["@belka.container_support_bundle.quantityhandler"]]
I trust Symfony to create services correctly, so I don't usually fetch services from the container in order to test them (though I do have a smoke-test that tries to create almost every service).
So, if you are trying to get a 'belka.auth_bundle.user_handler', I would manually create the Belka\AuthBundle\Handler\UserHandler instance, with your mock as one of the arguments.
For services that are required deeper within the services, there aren't easy ways to mock them (or to get the mock into place), but you can use the service container environments to override them.
For example, I have a service that tests if a Request has come from a bot - but while running functional tests I replace it entirely with a service that always says 'not a bot', by overriding my 'app.bot_detect.detector' service in /config/services_test.yml
# set the default bot detector to a simple fake, always return false
app.bot_detect.detector:
    class: App\Services\BotDetectorNeverBot
In the main config/services.yml, the class would really perform the check, but the method in the test environment's BotDetectorNeverBot class always says false.
In the same way, you could override belka.auth_bundle.mail_service in services_test.yml (or config_test.yml) to not send email, and store something instead. You just have to make sure you are including '*_test.yml' files appropriately.

WebSphere does not commit JPA transaction

Could someone explain to me why WebSphere Application Server 8.5.5 does not commit (or even begin?) transactions in JTA mode?
I have a DAO class annotated with
@Stateless
@TransactionManagement(value = TransactionManagementType.CONTAINER)
and I have a method annotated with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW). The method simply inserts some entities into the database (if they do not exist yet); a fuller sketch of how these pieces fit together follows the snippet:
for (MyEntity entity : entities) {
    if (validate(entity)) { // Programmatic bean validation, returns true when ok
        getEntityManager().persist(entity);
    }
}
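For reference, this is roughly how the pieces described above are meant to combine under container-managed transactions (a sketch; the DAO class name and the validation body are assumed, MyEntity is from the question). The container is expected to begin and commit the JTA transaction around the method, with no manual begin/commit:
import java.util.List;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
@TransactionManagement(TransactionManagementType.CONTAINER)
public class MyEntityDao {

    @PersistenceContext
    private EntityManager em;

    // The container starts a new JTA transaction before this method and commits it afterwards
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void saveAll(List<MyEntity> entities) {
        for (MyEntity entity : entities) {
            if (validate(entity)) {
                em.persist(entity);
            }
        }
    }

    private boolean validate(MyEntity entity) {
        // Placeholder for the programmatic bean validation described in the question
        return true;
    }
}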
When the tests run with Arquillian in embedded GlassFish, this works perfectly. I can stop the code at a breakpoint in Eclipse (Luna & Kepler) after this method completes and verify in the database that the data is there. The data used in the test is identical to the data used when deployed on WAS. (Validation errors are shown correctly when tested separately.)
According to the instructions (http://docs.oracle.com/javaee/6/tutorial/doc/bncij.html):
The code does not include statements that begin and end the transaction...
I apparently don't understand this correctly, as I have to explicitly wrap the method contents with these:
getEntityManager().getTransaction().begin();
... The persist loop ...
getEntityManager().getTransaction().commit();
...to make the persisting work.
If I do not do this, nothing is put into the database.
I also injected an extra resource for checking the transaction status
@Resource
private TransactionSynchronizationRegistry tsr;
and put this at the end of the method
System.out.println("Transaction status: " + tsr.getTransactionStatus());
getEntityManager().flush();
The output was this:
Transaction status: 0
where 0 = Status.STATUS_ACTIVE
However, at the flush an exception was thrown:
javax.persistence.TransactionRequiredException:
Exception Description: No transaction is currently active
I spent days trying to figure this out on WAS, while it worked the whole time in the embedded GlassFish (v3) tests.
Both use Java EE 6 (and Java 6), though for debugging in Eclipse I have to switch to Java EE 7 + Java 7.
Prior to this, in another project, I wrote similar code on GlassFish v4 without any problems.
So could someone clarify whether there are some WAS-specific requirements to make this work, or do I just need to do the exact opposite of what the instructions say and of how I understand things should work?
I already have the following configuration on WAS:
(admin console)
server > server types > WebSphere application servers > server1 > Container Services > Default Java Persistence API settings > Default JTA data source JNDI name = 'jdbc/kr' (the same as configured in my persistence.xml)
resources > JDBC > JDBC providers > Oracle JDBC Driver (pings ok)
(When this was created) the 'Implementation type' was set to 'Connection Pool Datasource', but I also tried it using 'XA'.
// UPDATE
The getEntityManager method simply returns the injected entity manager from the superclass:
public abstract class GenericDAO<T extends GenericEntity> {

    @PersistenceContext
    private EntityManager em;

    ...

    public EntityManager getEntityManager() {
        return this.em;
    }
}
// GenericEntity is an interface to force the entities to have the "get all" named query.
The class uses the generic DAO pattern (you get the idea from Single DAO & generic CRUD methods (JPA/Hibernate + Spring), though I have my own modifications, as it's an abstract class with default CRUD methods).
When the getEntityManager method is used instead of accessing the resource directly, it's possible to override the entity manager used in the superclass if the concrete DAO class decides to use its own. The superclass also calls getEntityManager, so if you override it in the implementing class, the abstract class ends up using the same EntityManager as the implementing class (see the sketch below). This method is also usable in tests, where you can get the EntityManager and evict data when needed.
This way you can also easily add logging when the EntityManager is accessed (logging interceptor).
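As an illustration of the override, a hypothetical concrete DAO (not from the original code) could supply its own EntityManager, and the inherited CRUD methods in the abstract class would pick it up too, since they also go through getEntityManager():
@Stateless
public class CustomerDao extends GenericDAO<Customer> { // Customer is assumed to implement GenericEntity

    // A different persistence unit, purely to illustrate the override
    @PersistenceContext(unitName = "otherUnit")
    private EntityManager otherEm;

    @Override
    public EntityManager getEntityManager() {
        // Both this class and the abstract superclass now use otherEm
        return otherEm;
    }
}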
// UPDATE 2
It occurred to me that there is a separate resource manager used to get remote resources (EJBs). This is so that the location of the EJB is configurable from a property file. However, the inner injection still works within the EJB of this service of mine.
I started wondering whether this could somehow cause the container to lose its transaction handling ability.
I also noticed that there is a @Singleton scoped bean along the path that uses the actual transactional resources. I could not find a clear explanation of what scopes the beans should have (probably there is no requirement), but I ended up understanding that the DAO should be @Stateless.
In Java EE 7 this is much clearer, as there is the @Transactional annotation for indicating this (see the sketch below).
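For comparison, a minimal Java EE 7 sketch (class and method names are hypothetical, not from the original code) where @Transactional makes a method on a CDI bean transactional:
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;

public class EntityWriter { // a CDI-managed bean

    @PersistenceContext
    private EntityManager em;

    // The container begins and commits the JTA transaction around this method
    @Transactional(Transactional.TxType.REQUIRES_NEW)
    public void save(MyEntity entity) {
        em.persist(entity);
    }
}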

Symfony2: Mocked service is set in the container but not used by the controller (it still uses the original service)

I'm writing the functional tests for a controller.
It uses a class to import some data from third-party websites; to do this I wrote a class that I registered in Symfony as a service.
Now, in my functional tests, I want to substitute this service with a mocked one, set it in the container and use it in my functional tests.
So my code is the following:
// Mock the ImportDataManager and substitute it in the services container
$mockDataImportManager = $this->getMockBuilder('\AppBundle\Manager\DataImportManager')->disableOriginalConstructor()->getMock();
$client->getContainer()->set('shq.manager.DataImport', $mockDataImportManager);
$client->submit($form);
$crawler = $client->followRedirect();
As I know that the client reboots the kernel between requests and that I have to set the mocked class again, I set the mock immediately before the call to $client->submit.
But this approach does not seem to work for me, and the controller still continues to use the real version of the service instead of the mocked one.
How can I use the mocked class to avoid calling the remote website during my functional test?
If I dump the set mocked service, I can see it is correctly set:
dump($client->getContainer()->get('shq.manager.DataImport'));die;
returns
.SetUpControllerTest.php on line 145:
Mock_DataImportManager_d2bab1e7 {#4807
-__phpunit_invocationMocker: null
-__phpunit_originalObject: null
-em: null
-remotes: null
-tokenGenerator: null
-passwordEncoder: null
-userManager: null
}
But it is not used during the $client->submit($form) call; instead, the original service is used.
UPDATE
Continuing to search for a solution, I landed on this GitHub issue from the Symfony project, where a user asks for a solution to the same problem I have.
The second call doesn't use the mocked/substituted version of his class but the original one instead.
Is this the correct behavior? So, is it true that I cannot modify the service container on a second call to the client?
Still, I don't understand why the service is not substituted in the container, and I don't have a real solution to that problem.
Anyway, I found some sort of workaround, which is in reality the more correct solution (even if it remains unclear why the service is not substituted, and this is a curiosity I'd like to resolve - maybe because the $client->submit() method uses the POST method?).
My workaround is a simple test double.
I created a new class in AppBundle/Tests/TestDouble and called it DataImportManagerTestDouble.php.
It contains the single method used by the controller:
namespace AppBundle\Tests\TestDouble;

use AppBundle\Entity\User;

class DataImportManagerTestDouble
{
    public function importData(User $user)
    {
        return true;
    }
}
Then I register it as a service in the config_test.yml (app/config/config_test.yml) file in the following way:
services:
    shq.manager.DataImport:
        class: AppBundle\Tests\TestDouble\DataImportManagerTestDouble
This way, during the tests, and only during the tests, the class loaded as the service is the test double and not the original one.
So the tests pass and I'm (relatively) happy. For the moment, at least.

How to access EJB services from a grails standalone client

I've been having problems accessing my EJB services from a standalone client I've developed on Grails 2.0.3. The EJB services are deployed on a GlassFish server (Java). I tested this code in a NetBeans tester class to access the EJBs:
Properties p = new Properties();
p.put("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
p.setProperty("java.naming.factory.url.pkgs", "com.sun.enterprise.naming");
p.setProperty("java.naming.factory.state", "com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl");
p.setProperty("org.omg.CORBA.ORBInitialHost", INTEGRATION_IP);
p.setProperty("org.omg.CORBA.ORBInitialPort", CORBA_PORT);
ctx = new InitialContext(p);
try {
    this.admAuth = (AdmAuthenticationRemote) this.ctx.lookup(Tester.AUTHENTICATION_SERVICE_JNDI);
} catch (Exception e) {
    ...
}
Tester.AUTHENTICATION_SERVICE_JNDI is a variable that contains the path to the deployed service, in this case something like "java:global/..." that represents the address of the service being requested. This way of accessing the services works perfectly from the tester, but when I try to do the same from Grails it doesn't work. I am able to create the context the same way, but when I invoke the ctx.lookup() call I get an exception:
Message: Lookup failed for 'java:global/...' in SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.SerialInitContextFactory,
java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl, java.naming.factory.url.pkgs=com.sun.enterprise.naming}
Cause: javax.naming.NamingException: Unable to acquire SerialContextProvider for SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.SerialInitContextFactory,
java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl, java.naming.factory.url.pkgs=com.sun.enterprise.naming}
[Root exception is java.lang.RuntimeException: Orb initialization erorr]
The main exception is a naming exception, which means that it failed in the ctx.lookup(), but the cause is the orb initialization exception, which has another exception stack:
java.lang.RuntimeException: Orb initialization erorr
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Can not set long field com.sun.corba.ee.impl.orb.ORBDataParserImpl.waitForResponseTimeout to java.lang.Integer
Caused by: java.lang.IllegalArgumentException: Can not set long field com.sun.corba.ee.impl.orb.ORBDataParserImpl.waitForResponseTimeout to java.lang.Integer
I'm really lost here. I've been having a lot of problems getting this going on Grails; I had to get all the GlassFish jars (libs and modules) so it could make the InitialContext() call, but now I'm not sure whether this is still a jar problem, a configuration problem, or something else.
I know that IllegalArgumentException occurs when you try to assign incompatible types in Java, but I'm not setting anything like that, so I assume it's something internal to the initialization.
So the question is: why is this exception coming up?
Is there another way to invoke my services from Grails that works better?
The error is that you're trying to run your web application using the Tomcat plugin in Grails (using the command grails run-app). The problem is that when you try to create the InitialContext (com.sun.enterprise.naming.SerialInitContextFactory), Groovy gives you an error casting some types if you're using the client libraries for GF 3.1. (I know that this is the problem, but I really don't know the reason for it, because in theory this should work.)
If you generate the .war file and deploy it in an app server, you can connect to your EJBs without problems. And if you deploy it on another GF server, you don't have to import any of the client jars.
This will work perfectly in production; the only problem is that you must compile and deploy your app on the GF server with every little change, which is a bit annoying during development.
If you want to work outside of GF and use the command "grails run-app", you must modify two of the GF 3.1 jars on the machine where you have the Grails application:
1- The jar file $GLASSFISH_HOME/modules/glassfish-corba-omgapi.jar
You should search the web for the class com.sun.corba.ee.spi.orb.ParserImplBase and modify this part
Field field = getAnyField(name);
field.setAccessible(true);
field.set(ParserImplBase.this, value);
for this
if (name.equalsIgnoreCase("waitForResponseTimeout")) {
    Object newValue = new Long(1800000);
    Field field = getAnyField(name);
    field.setAccessible(true);
    field.set(ParserImplBase.this, newValue);
} else {
    Field field = getAnyField(name);
    field.setAccessible(true);
    field.set(ParserImplBase.this, value);
}
This should resolve the java.lang.IllegalArgumentException.
2- The jar file $GLASSFISH_HOME/modules/glassfish-corba-orb.jar
You must delete the javax.rmi.CORBA.PortableRemoteObjectClass class from this library, because it conflicts with one used by the Grails plugin.
PS:
If you do not want to have the GF client jars in your Grails application, you can add the following libraries to the classpath of your client machine:
$GLASSFISH_HOME/modules/ejb-container.jar
$GLASSFISH_HOME/modules/ejb.security.jar
$GLASSFISH_HOME/modules/management-api.jar
If you use the Grails console with the grails run-app command, you must modify the configuration file $GRAILS_HOME/conf/groovy-starter.conf with this:
load $GLASSFISH_HOME/modules/ejb-container.jar
load $GLASSFISH_HOME/modules/ejb.security.jar
load $GLASSFISH_HOME/modules/management-api.jar