If Else Statements in Cypress

I have a problem with if/else in Cypress. There are three security questions, and they are not shown consistently; the question changes every time. So I wanted to set an answer for each specific question. The error is that whenever question B appears, Cypress can't skip question A and it stops. Cypress doesn't follow my if and else, and I can't understand why this happens.
Here is my code:
let disabled = null;

cy.contains("What is your dream car brand?").then(() => {
  disabled = true;
  cy.get("input[name=securityQues]").type("Ferrari");
  cy.end();
});

if (disabled == false) {
  cy.contains("What is your favorite movie?").then(() => {
    cy.get("input[name=securityQues]").type("Forrest Gump");
  });
} else {
  cy.contains("Where is your favourite place to vacation?").then(() => {
    cy.get("input[name=securityQues]").type("Japan");
  });
}

Try another way to find your main element with the question. .contains will not work in this case. Try .get('css-selector'), and then you can access that element appropriately. See an example here: https://github.com/valor-software/ngx-bootstrap/blob/development/cypress/support/timepicker.po.ts#L117
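For example (a sketch; the .security-question selector is hypothetical, so use whatever matches your markup), you can read the question text and branch inside .then():

// Sketch: read the question text with a CSS selector, then branch on it.
// '.security-question' is a made-up selector -- replace it with your own.
cy.get('.security-question')
  .invoke('text')
  .then((question) => {
    if (question.includes('dream car brand')) {
      cy.get('input[name=securityQues]').type('Ferrari');
    } else if (question.includes('favorite movie')) {
      cy.get('input[name=securityQues]').type('Forrest Gump');
    } else {
      cy.get('input[name=securityQues]').type('Japan');
    }
  });

The key difference from the code in the question is that the branching happens inside .then(), after Cypress has actually queried the page, rather than synchronously while the command queue is still being built.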

Doing conditional testing on user interfaces is strongly discouraged, as your tests will never be deterministic. I'd suggest finding a workaround for the way you are testing what you want to test.
There is a good read on that here: https://docs.cypress.io/guides/core-concepts/conditional-testing.html#The-problem
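For instance, along the lines of what those docs recommend, you could control the app state up front so the test always knows which question will appear. A sketch (the cy.request() endpoint below is an assumption; your backend would need to expose something like it for test runs):

// Make the test deterministic by seeding the question before visiting.
// '/api/test/security-question' is a hypothetical test-only endpoint.
cy.request('POST', '/api/test/security-question', { question: 'dream-car' });
cy.visit('/signup');
cy.contains('What is your dream car brand?');
cy.get('input[name=securityQues]').type('Ferrari');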

Related

Should a unit test include the converse of what you are testing?

Imagine that we have a game with different types of servers representing different countries. Now, let's say the game only allows users to "buddy" other players who are on the same server their account is attached to. As a developer, I am tasked with writing a test case to see if the feature for users to "buddy" one another works. Here is where my dilemma lies: since I only have to test whether or not a user can "buddy" someone else, do I also have to test that the user cannot add users from another server in this same test case, or should that be written in a totally separate test case?
Yes, you should.
The first test you're describing, where you just test if they can have a buddy, is known as Happy Path testing. You're testing that it works with no exceptions or errors or weird use cases.
Happy Path testing is a good start. Everything else is where the real fun begins.
In your case, the things that come to mind are...
What if they buddy with an invalid buddy?
What if they buddy with someone who is already their buddy?
What if they buddy with something that's not a user?
How these should be organized is a matter of taste. Ideally they're all separate tests. This makes the purpose of each test clear via the test name, and it avoids interactions and dependencies between the tests. Here's a sketch in no particular language.
describe add_buddy {
    test happy_path {
        assert user.add_buddy(valid_buddy);
        assert user.buddies.contains(valid_buddy);
    }

    test buddy_on_another_server {
        buddies = user.buddies;
        assert !buddies.contains(invalid_buddy);
        assertThrows {
            user.add_buddy(invalid_buddy);
        } InvalidBuddy;
        assert buddies == user.buddies, "buddy list unchanged";
    }

    test buddy_with_non_user {
        buddies = user.buddies;
        assertThrows {
            user.add_buddy(non_user);
        } ArgumentError;
        assert buddies == user.buddies, "buddy list unchanged";
    }

    test buddy_an_existing_buddy {
        assert user.add_buddy(valid_buddy);
        # Should this return true? False? An exception?
        # You have to decide. I decided false.
        assert !user.add_buddy(valid_buddy);
        # The buddy is only on the list once.
        assert user.buddies.numContains(valid_buddy) == 1;
    }
}
Things like user and valid_buddy can be created in the setup routine, or, better, they're available via a fixtures generator such as factory_girl.
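For illustration, a minimal hand-rolled factory in JavaScript (a sketch; makeUser and the server names are ours, and a library like factory_girl gives you the same thing with less code):

// Minimal fixture factory: each call returns a fresh user with sane
// defaults that individual tests can override.
let nextId = 1;

function makeUser(overrides = {}) {
  return {
    id: nextId++,
    server: 'eu-west',
    buddies: [],
    ...overrides,
  };
}

// In a test:
const user = makeUser();
const validBuddy = makeUser();                        // same server
const invalidBuddy = makeUser({ server: 'us-east' }); // different server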

Ember.js: How do I prevent this property observer from being hit more than once?

In my controller I have a property and a function observing that property. When the property changes once, the observer is hit three times: twice with the old data and once with the new data. You can see it happen in the console window of the jsbin I just created:
jsbin
Usage: click one of the books (not the first one), and look in the console window at the results.
In my actual app, the work to be performed requires an asynchronous download. Because of the three hits, this observer downloads the wrong content twice and the correct content once. Making the problem worse, the asynchronous responses do not come back in sequence. :)
An interim solution has been to schedule the download code to run later.
I'm not sure why this is happening, but the guides give a way to fix it. They suggest something like this:
bookObserver: function() {
    Ember.run.once(this, 'bookWasChanged');
}.observes('book'),

bookWasChanged: function() {
    // This will only run once per run loop
}
Personally, I always make the assumption that observers could fire even when I don't want them to. For instance, with this code, I would do the following:
bookObserver: function() {
    var book = this.get('book');
    var lastBook = this.get('lastBook');
    if (book !== lastBook) {
        this.set('lastBook', book);
        doSomethingWithBook(book);
    }
}.observes('book')
This way, even if Ember calls your observer 1000 times, you're only going to do the work you want when the book actually changes.
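One more caveat for the asker's async-download case: even if the observer only does its work once per change, responses can still come back out of order. A common guard (a sketch; downloadContentFor is a hypothetical stand-in for your download call) is to remember the most recently requested book and ignore replies for anything else:

bookObserver: function() {
    var book = this.get('book');
    this.set('pendingBook', book); // remember the latest request

    var self = this;
    // downloadContentFor is hypothetical -- substitute your async call.
    downloadContentFor(book).then(function(content) {
        // Drop responses for books we are no longer waiting on.
        if (self.get('pendingBook') === book) {
            self.set('content', content);
        }
    });
}.observes('book')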

theintern: 1 test failure causes all tests to fail - is this expected behaviour?

I have what I think are standard functional tests set up for the Intern, and I can get them to pass consistently in several browsers. I'm still evaluating whether it makes sense to use the Intern for a project, so I'm trying to see what happens when tests fail. Currently, if I make one test fail, it always seems to cause all the tests in the suite to fail.
My tests look a bit like:
registerSuite({
    name: 'demo',

    'thing that works': function () {
        return this.remote.get('http://foo.com')
            .waitForCondition("typeof globalThing !== 'undefined'", 5000)
            .elementById('bigRedButton')
            .clickElement()
            .end()
            .eval('jsObj.isTrue()')
            .then(function (result) {
                assert.isTrue(result);
            })
            .end(); // not sure if this is necessary...
    },

    'other thing that works': function () {
        // more of the same
    }
});
I'm going to try and debug this for myself, but I was just wondering if anyone knows whether this is expected behaviour (one test failure causes the whole suite to fail and report that all tests in the suite have failed), or whether it's more likely that my setup is wrong and I have bad interactions between the promises or something?
Any help would be awesome, and happy to provide any more info if helpful :)
Thanks!
I ran into exactly the same issue a few weeks ago and created a ticket on GitHub for it: https://github.com/theintern/intern/issues/46.
It's tagged 'needs-triage' at the moment; I have no idea what that means.

Where should I place this code in MVC?

My code works perfectly, but what's the best practice in this case?
Here is the code that is important.
This is in the controller.
private IProductRepository repository;

[HttpPost]
public ActionResult Delete(int productId) {
    Product prod = repository.Products.FirstOrDefault(p => p.ProductID == productId);
    if (prod != null) {
        repository.DeleteProduct(prod);
        TempData["message"] = string.Format("{0} was deleted", prod.Name);
    }
    return RedirectToAction("Index");
}
This is the repository (first the interface):
public interface IProductRepository {
    IQueryable<Product> Products { get; }
    void SaveProduct(Product product);
    void DeleteProduct(Product product);
}
And here is the repository implementation (the important part). I want to point out that this is not a fake class, as is pretty clear; the testing is done on fake classes.
private EFDbContext context = new EFDbContext();

public IQueryable<Product> Products {
    get { return context.Products; }
}

public void DeleteProduct(Product product) {
    context.Products.Remove(product);
    context.SaveChanges();
}
First question:
When testing this, I will write two test methods on the controller in "ControllerTest": "Can_delete_valid_product" and "Cannot_delete_invalid_product". Is there any point in having a test class for the repository, like "RepositoryTest"? After all, the controller tests already check that the delete function works, so there's no need to test it twice, right?
Second question:
Here I check in the controller whether the product exists before trying to delete it. If it exists, I call the delete function in the repository. This means there should never be the possibility of an exception. BUT you could still cause an exception in the repository if you pass down null (which can't happen here, but could if you forget the null check). The question is whether the check for the product's existence should be done in the repository instead?
I prefer to keep logic out of the controller for the most part. A test of the controller action verifies that the repository is called, but the repository itself is mocked in that test. I would make the repository responsible for the null checking.
Personally, I create separate tests for my repositories/data access to ensure they work properly. The controllers themselves are tested with mocks.
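The pattern itself is language-agnostic. As a rough sketch in JavaScript (deleteAction stands in for the C# controller action above, and the fake repository is hand-rolled rather than coming from a mocking framework):

// Stand-in for the controller action: look the product up, delete if found.
function deleteAction(repository, productId) {
  const prod = repository.products.find((p) => p.productId === productId);
  if (prod) repository.deleteProduct(prod);
  return 'Index'; // the redirect target
}

// Hand-rolled fake repository that records what was deleted.
function makeFakeRepository(products) {
  const deleted = [];
  return { products, deleted, deleteProduct: (p) => deleted.push(p) };
}

// "Can_delete_valid_product"
const repo = makeFakeRepository([{ productId: 1, name: 'Widget' }]);
deleteAction(repo, 1);
console.assert(repo.deleted.length === 1, 'valid product was deleted');

// "Cannot_delete_invalid_product"
const repo2 = makeFakeRepository([]);
deleteAction(repo2, 42);
console.assert(repo2.deleted.length === 0, 'unknown id deletes nothing');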
Actually, it's entirely possible (just maybe not that likely) that someone could delete a product just as someone else is trying to delete it. In this case you probably don't care or need to know that someone did, so I would probably just swallow that exception in the repository (though I would log it first). In terms of null checking and defensive programming, that's entirely a personal choice. Some people leave checks like that to the entry points of the system, whereas others build a layered defense with additional checks throughout the code. The problem is that these checks can get quite ugly, which is a big part of why I wish Code Contracts would gain more traction.
This means there should never be the possibility of an exception. BUT you could still cause an exception in the repository if you pass down null (which can't happen here, but could if you forget the null check).
Or if it's deleted after you check it exists but before you delete it. Or if you lose connection to the repository (or will the method never return in this case?). You can't avoid exceptions in this way.

Best practices in unit-test writing

I have a "best practices" question. I'm writing a test for a certain method, but there are multiple entry values. Should I write one test for each entry value or should I change the entryValues variable value, and call the .assert() method (doing it for all range of possible values)?
Thank you for your help.
Best regards,
Pedro Magueija
Edit: I'm using .NET (Visual Studio 2010 with VB).
If you are having to write many tests which vary only in initial input and final output, you should use a data-driven test. This allows you to define the test once, along with a mapping between inputs and outputs. The unit testing framework will then interpret it as one test per case. How to actually do this depends on which framework you are using.
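As a sketch of the idea in a JavaScript runner (Jest's test.each; in NUnit the equivalent is [TestCase], mentioned in the last answer below; add here is a hypothetical method under test):

// One test definition, one reported result per data row.
const add = (a, b) => a + b;

test.each([
  [1, 1, 2],
  [0, 5, 5],
  [-3, 3, 0],
])('add(%i, %i) === %i', (a, b, expected) => {
  expect(add(a, b)).toBe(expected);
});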
It's better to have separate unit tests for each input/output sets covering the full spectrum of possible values for the method you are trying to test (or at least for those input/output sets that you want to unit test).
Smaller tests are easier to read.
The name is part of the documentation of the test.
Separate methods give a more precise indication of what has failed.
So if you have a single method like:
void testAll() {
    // setup1
    assert()
    // setup2
    assert()
    // setup3
    assert()
}
In my experience this gets very big very quickly, and so becomes hard to read and understand, so I would do:
void testDivideByZero() {
    // setup
    assert()
}

void testUnderflow() {
    // setup
    assert()
}

void testOverflow() {
    // setup
    assert()
}
Should I write one test for each entry value, or should I change the entryValues variable and call the .assert() method for the full range of possible values?
If you have one code path typically you do not test all possible inputs. What you usually want to test are "interesting" inputs that make good exemplars of the data you will get.
For example if I have a function
define add_one(num) {
    return num + 1;
}
I can't write a test for all possible values, so I may use MAX_NEGATIVE_INT, -1, 0, 1, MAX_POSITIVE_INT as my test set, because they are good representatives of the interesting values I might get.
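In JavaScript terms, that test set might look like this (a sketch; console.assert stands in for your framework's assert):

// Representative boundary values rather than the whole input range.
const add_one = (num) => num + 1;

const cases = [
  [Number.MIN_SAFE_INTEGER, Number.MIN_SAFE_INTEGER + 1],
  [-1, 0],
  [0, 1],
  [1, 2],
  [Number.MAX_SAFE_INTEGER - 1, Number.MAX_SAFE_INTEGER],
];

for (const [input, expected] of cases) {
  console.assert(add_one(input) === expected, `add_one(${input})`);
}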
You should have at least one input for every code path. If you have a function where every value corresponds to a unique code path, then I would consider writing tests for the complete range of possible values. An example of this would be a command parser.
define execute(directive) {
    if (directive == 'quit') { exit; }
    elsif (directive == 'help') { print help; }
    elsif (directive == 'connect') { initialize_connection(); }
    else { warn("unknown directive"); }
}
For the purpose of clarity I used elsifs rather than a dispatch table. I think this makes it clear that each unique value that comes in has a different behavior, and therefore you would need to test every possible value.
Are you talking about this difference?
- (void) testSomething
{
    [foo callBarWithValue:x];
    assert…
}

- (void) testSomething2
{
    [foo callBarWithValue:y];
    assert…
}

vs.

- (void) testSomething
{
    [foo callBarWithValue:x];
    assert…
    [foo callBarWithValue:y];
    assert…
}
The first version is better in that when a test fails, you'll have a better idea of what does not work. The second version is obviously more convenient. Sometimes I even stuff the test values into a collection to save work. I usually choose the first approach when I might want to debug just that single case separately, and I only choose the latter when the test values really belong together and form a coherent unit.
You have two options really. You don't mention which test framework or language you are using, so one of these may not be applicable.
1) If your test framework supports it, use a RowTest. MbUnit and NUnit support this if you're using .NET; it allows you to put multiple attributes on your method, and each row is executed as a separate test.
2) If not, write a test per condition and make sure you give it a meaningful name, so that if (when) the test fails you can find the problem easily and the name means something to you.
EDIT
It's called TestCase in NUnit: NUnit TestCase explanation