My application callback starts a Supervisor that conflicts with my unit tests.
With that callback in place I get something like {:error, {:already_started, #PID<0.258.0>}} when I try to run the unit tests, because my processes are already started.
Can I run the Application callback only for :dev and :prod, keeping the :test environment clean of startup code?
I am looking for something like this:
def application do
  [
    applications: [:logger],
    mod: {MyApplication, [], only: [:dev, :prod]}
  ]
end
The only: [:dev, :prod] part is the missing piece.
I do not know if this is the correct way to handle testing in this case, but here's how you can do what you're asking for:
In mix.exs:
def application do
  rest = if(Mix.env == :test, do: [], else: [mod: {MyApp, []}])
  [applications: [:logger]] ++ rest
end
For the demo below, I added the following to MyApp.start/2:
IO.puts "starting app..."
Demo:
$ MIX_ENV=dev mix
starting app...
$ MIX_ENV=prod mix
starting app...
$ MIX_ENV=test mix # no output
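An alternative (a sketch of a common pattern, not part of the answer above) is to keep the mod: entry and let MyApp.start/2 decide which children to start based on a config key; the :start_children? key below is hypothetical and would be set to false only in config/test.exs:
# lib/my_app.ex -- sketch only; assumes `config :my_app, start_children?: false`
# in config/test.exs and true (or no setting at all) in dev/prod.
defmodule MyApp do
  use Application

  def start(_type, _args) do
    children =
      if Application.get_env(:my_app, :start_children?, true) do
        [MyApp.Worker]  # the real supervision tree goes here (MyApp.Worker is a placeholder)
      else
        []              # start an empty supervisor under :test
      end

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end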
One solution is to kill that process before the test suite runs. For example, you could do something like the following:
setup do
  # `pid` is the PID of the conflicting process started by the application callback
  Process.exit(pid, :kill)
end

test "do something..." do
  assert 1 == 1
end
This ensures that the process is already killed before each test is run.
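A minimal sketch of how you might look that pid up first, assuming the supervisor is registered under a name (MyApp.Supervisor here is a placeholder; adjust it to your supervision tree):
setup do
  # Assumes the supervisor from the application callback is registered
  # as MyApp.Supervisor; adjust the name to match your own tree.
  case Process.whereis(MyApp.Supervisor) do
    nil -> :ok
    pid -> Process.exit(pid, :kill)
  end

  :ok
end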
Related
I recently started using Dash for developing web applications. In order to improve the quality of my code, I began writing unit tests for these applications.
I followed the tutorials in the Dash documentation (https://dash.plotly.com/testing) and installed ChromeDriver and all the other needed dependencies. When I run the example unit tests from the Dash documentation, the tests that don't use dash_duo run just fine, but every single test that uses dash_duo fails with a TestingTimeoutError.
Eventually, I wrote a super simple test:
import dash
from dash import html


def test_001_test_dash_duo(dash_duo):
    x = 2
    assert x == 2
Even this test fails with a timeout error:
msg = 'expected condition not met within timeout'

    def until(
        wait_cond, timeout, poll=0.1, msg="expected condition not met within timeout"
    ):  # noqa: C0330
        res = wait_cond()
        logger.debug(
            "start wait.until with method, timeout, poll => %s %s %s",
            wait_cond,
            timeout,
            poll,
        )
        end_time = time.time() + timeout
        while not res:
            if time.time() > end_time:
>               raise TestingTimeoutError(msg)
E               dash.testing.errors.TestingTimeoutError: expected condition not met within timeout
Any ideas what is needed to get Dash unit tests that use dash_duo to stop failing with a timeout?
I have been trying like mad to use config-based mocks in Elixir. I defined my mock module and placed it in a .ex file under the test/ directory, but whenever I run mix test it fails to load the module. If I move the mock under lib/ then everything works just fine. Am I missing something in my configuration and file structure, or is there a way to tell mix to look for source files in another directory in addition to lib/?
File structure:
my_app/
|
+-- lib/
|     my_lib.ex
|     my_service.ex
|
+-- test/
|     test_helper.exs
|     my_service_mock.ex
|     my_lib_test.exs
|
+-- config/
      config.exs
      test.exs
      prod.exs
      dev.exs
config/dev.exs
import Config
config :my_app, my_service: MyApp.MyService
config/test.exs
import Config
config :my_app, my_service: MyApp.MyServiceMock
my_lib.ex
defmodule MyApp.MyLib do
  @my_service Application.get_env(:my_app, :my_service)
  def do_something, do: @my_service.do_something_else()
end
my_service.ex
defmodule MyApp.MyService do
  def do_something_else, do: {:ok, "Running business task"}
end
my_service_mock.ex
defmodule MyApp.MyServiceMock do
  def do_something_else, do: {:ok, "Faking business task"}
end
my_lib_test.exs
defmodule MyApp.MyLibTest do
  use ExUnit.Case
  alias MyApp.MyLib

  test "MyLib.do_something/0 should do its thing" do
    assert {:ok, "Faking business task"} = MyLib.do_something()
  end
end
The command "mix test" fails with the following error:
== Compilation error in file lib/my_lib.ex ==
** (UndefinedFunctionError) function MyApp.MyServiceMock.do_something_else/0 is undefined (module MyApp.MyServiceMock is not available)
MyApp.MyServiceMock.do_something_else()
lib/my_lib.ex:3: (module)
(stdlib 3.14) erl_eval.erl:680: :erl_eval.do_apply/6
I'm running Elixir 1.11.2.
Well, I finally found the solution in this post on the Elixir Forum: https://elixirforum.com/t/load-module-during-test/7400
It turns out there is an option in the Mix project configuration, elixirc_paths, that specifies the paths for the sources.
So in my "mix.exs" I did the following:
def project do
  [
    ...
    elixirc_paths: elixirc_paths(Mix.env),
    ...
  ]
end

defp elixirc_paths(env_name) do
  case env_name do
    :test -> ["lib", "test/mockery"]
    _ -> ["lib"]
  end
end
Of course I added the directory "test/mockery/" and moved "MyApp.MyServiceMock" there...
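The same helper is often written with multi-clause function heads, which is equivalent:
defp elixirc_paths(:test), do: ["lib", "test/mockery"]
defp elixirc_paths(_), do: ["lib"]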
We have a unit test suite written in RSpec, and quite a lot of the tests are failing.
What I am looking for is a script or a magic command that marks all the failed tests as skipped, so I don't have to go over them one by one and mark them as skipped myself.
I found this awesome script that does exactly what I need:
https://gist.github.com/mcoms/77954d191bde31d4677872d2ab3d0cd5
Copying the contents here, in case the original gist is deleted:
# frozen_string_literal: true

require 'tempfile'
require 'fileutils'

class CustomFormatter
  RSpec::Core::Formatters.register self, :example_failed

  def initialize(output)
    @output = output
  end

  def example_failed(notification)
    tf = Tempfile.new
    File.open(notification.example.metadata[:file_path]) do |f|
      counter = 1
      while (line = f.gets)
        # On the line where the failing example starts, swap it/scenario for skip
        if counter == notification.example.metadata[:line_number]
          line.sub!('it', 'skip')
          line.sub!('scenario', 'skip')
          @output << line
        end
        tf.write line
        counter += 1
      end
    end
    tf.close
    FileUtils.mv tf.path, notification.example.metadata[:file_path]
  end
end
Should be relatively straightforward. RSpec lists failing specs like this
rspec ./spec/models/user.rb:67 # User does this thing
rspec ./spec/models/post.rb:13 # Post does another thing
rspec ./spec/models/rating.rb:123 # Rating does something else entirely
File name and line number point to the opening line of the test, the one with it ... do.
Write a script that
extracts file names and line numbers from the failure output,
opens those files, goes to the specified line,
and replaces the leading it with xit.
I'm totally OK with writing a "normal" test that captures the IO for this.
I would just like to know whether it is possible to use a doctest.
An example would be:
defmodule CLI do
  @doc """
  Politely says Hello.

  ## Examples

      iex> CLI.main([])
      "Hello dear person." # this would be the expected IO output
  """
  def main(args) do
    IO.puts "Hello dear person."
  end
end
defmodule CLITest do
  use ExUnit.Case
  doctest CLI
end
You can use the same function you'd use in a normal test: ExUnit.CaptureIO.capture_io/1. It might not be well suited for doctests once you add more functionality to the function, though.
defmodule CLI do
  @doc """
  Politely says Hello.

  ## Examples

      iex> import ExUnit.CaptureIO
      iex> capture_io(fn -> CLI.main([]) end)
      "Hello dear person.\\n"
  """
  def main(args) do
    IO.puts "Hello dear person."
  end
end
$ mix test
.
Finished in 0.03 seconds
1 test, 0 failures
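If main/1 grows beyond a single IO.puts, one option (a sketch of a common refactor, not part of the original answer) is to separate building the message from printing it, so the doctest exercises the pure function and main/1 stays thin:
defmodule CLI do
  @doc """
  Builds the greeting.

  ## Examples

      iex> CLI.greeting([])
      "Hello dear person."
  """
  def greeting(_args), do: "Hello dear person."

  def main(args) do
    args |> greeting() |> IO.puts()
  end
end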
I have some code written in Django/Python. The premise is that the HTTP response is a generator function: it spits the output of a subprocess onto the browser window line by line. This works really well when I am using the Django test server. When I use the real server it fails; basically it just beachballs when you press submit on the previous page.
@condition(etag_func=None)
def pushviablah(request):
    if 'hostname' in request.POST and request.POST['hostname']:
        hostname = request.POST['hostname']
        command = "blah.pl --host " + hostname + " --noturn"
        return HttpResponse(stream_response_generator(hostname, command), mimetype='text/html')
def stream_response_generator(hostname, command):
    proc = subprocess.Popen(command.split(), bufsize=0, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    yield "<pre>"
    # yield the subprocess output one line at a time until it closes stdout
    for line in iter(proc.stdout.readline, ''):
        yield line
Anyone have suggestions on how to get this working on the real server, or even how to debug why it is not working?
I discovered that the generator function is actually running, but the HttpResponse only renders a page once it has completed. I don't want the user to have to wait for it to finish; I would like the user to see output as the subprocess progresses.
I'm wondering if this issue could be related to something in Apache2 rather than Django.
@evolution, did you use gunicorn to deploy your app? If so, then you have created a service. I am having a similar kind of issue, but with LibreOffice. As far as I have researched, I have found that PATH is overriding the command path used by your subprocess. I do not have a solution yet. If you bind your app with gunicorn in a terminal, then your code will also work.