I have a workspace consisting of both lib and bin crates. Running cargo test --lib skips the binary crates.
--bins and --lib are not mutually exclusive; you can pass both and Cargo will run the tests for both kinds of targets:
$ cargo test --bins
Finished dev [unoptimized + debuginfo] target(s) in 0.01s
Running target/debug/deps/foo-c982c1477aaaf33d
running 1 test
test test_bins ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
$ cargo test --lib
Finished dev [unoptimized + debuginfo] target(s) in 0.02s
Running target/debug/deps/foo-532806c187f0c643
running 1 test
test test_lib ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
$ cargo test --bins --lib
Finished dev [unoptimized + debuginfo] target(s) in 0.02s
Running target/debug/deps/foo-532806c187f0c643
running 1 test
test test_lib ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/foo-c982c1477aaaf33d
running 1 test
test test_bins ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
The test project also has an integration test, which gets run when no target flag is provided:
$ cargo test
Compiling foo v0.1.0 (foo)
Finished dev [unoptimized + debuginfo] target(s) in 0.27s
Running target/debug/deps/foo-532806c187f0c643
running 1 test
test test_lib ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/foo-c982c1477aaaf33d
running 1 test
test test_bins ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/test_foo-79419bfea3135abf
running 1 test
test test_integration ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests foo
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
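For reference, the output above suggests a single package named foo with a library target, a binary target, and one integration test. A minimal sketch of such a layout (the file contents below are assumptions; only the test names come from the output) could be:

// src/lib.rs -- unit test picked up by `cargo test --lib`
#[cfg(test)]
mod tests {
    #[test]
    fn test_lib() {
        assert_eq!(1 + 1, 2);
    }
}

// src/main.rs -- unit test picked up by `cargo test --bins`
fn main() {}

#[cfg(test)]
mod tests {
    #[test]
    fn test_bins() {
        assert_eq!(2 + 2, 4);
    }
}

// tests/test_foo.rs -- integration test, run by a plain `cargo test`
#[test]
fn test_integration() {
    assert_eq!(3 + 3, 6);
}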
I have this alert, which I'm trying to cover with unit tests:
- alert: Alert name
  annotations:
    summary: 'Summary.'
    book: "https://link.com"
  expr: sum(increase(app_receiver{app="app_name", exception="exception"}[1m])) > 0
  for: 5m
  labels:
    severity: 'critical'
    team: 'myteam'
This test scenario fails every time unless the for: 5m line is commented out in the rule; only then does it succeed.
rule_files:
  - ./../../rules/test.yml
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      - series: 'app_receiver{app="app_name", exception="exception"}'
        values: '0 0 0 0 0 0 0 0 0 0'
      - series: 'app_receiver{app="app_name", exception="exception"}'
        values: '0 0 0 0 0 10 20 40 60 80'
    alert_rule_test:
      - alertname: Alert name
        eval_time: 5m
        exp_alerts:
          - exp_labels:
              severity: 'critical'
              team: 'myteam'
            exp_annotations:
              summary: 'Summary.'
              book: "https://link.com"
The result of this test:
FAILED:
alertname:Alert name, time:5m,
exp:"[Labels:{alertname=\"Alert name\", severity=\"critical\", team=\"myteam\"} Annotations:{book=\"https://link.com\", summary=\"Summary.\"}]",
got:"[]
Can someone please help me fix this test and explain the reason for the failure?
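For context, a rough sketch of the timing involved, assuming the usual promtool semantics where samples start at 0m and are spaced by interval: 1m:

# values: '0  0  0  0  0  10 20 40 60 80'
# time:    0m 1m 2m 3m 4m 5m 6m 7m 8m 9m
# increase(...[1m]) > 0 first holds around 5m, so with `for: 5m` the alert can
# only be pending at eval_time: 5m; it cannot reach the firing state before
# roughly 10m, which would explain exp_alerts expecting a firing alert while
# the test run gets got:"[]".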
I believe that all I'd need to do to resolve this is to configure SSM inside Image Builder to use my proxy via the environment variable HTTP_PROXY = HOST:IP.
For example, I can run this on another server where all traffic is directed through the proxy:
curl -I --socks5-hostname socks.local:1080 https://s3.amazonaws.com/aws-cli/awscli-bundle.zip -o awscli-bundle.zip
Here's what Image Builder is trying to do and failing at (before any of the Image Builder components are run):
SSM execution '68711005-5dc4-41f6-8cdd-633728ca41da' failed with status = 'Failed' in state = 'BUILDING' and failure message = 'Step fails when it is verifying the command has completed. Command 76b55646-79bb-417c-8bb6-6ee01f9a76ff returns unexpected invocation result: {Status=[Failed], ResponseCode=[7], Output=[ ----------ERROR------- + sudo systemctl stop ecs + curl https://s3.amazonaws.com/aws-cli/awscli-bundle.zip -o /tmp/imagebuilder_service/awscli-bundle.zip % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0 0 0 0 0 0 0 0 0 ...'
These env vars are all that should be needed; the problem is that I see no way to add them (similar to how you would in CodeBuild):
http_proxy=http://hostname:port
https_proxy=https://hostname:port
no_proxy=169.254.169.254
The SSM Agent does not read environment variables from the host; you need to provide them in the appropriate file below and then restart the SSM Agent:
On Ubuntu Server instances where SSM Agent is installed by using a snap: /etc/systemd/system/snap.amazon-ssm-agent.amazon-ssm-agent.service.d/override.conf
On Amazon Linux 2 instances: /etc/systemd/system/amazon-ssm-agent.service.d/override.conf
On other operating systems: /etc/systemd/system/amazon-ssm-agent.service.d/amazon-ssm-agent.override
[Service]
Environment="http_proxy=http://hostname:port"
Environment="https_proxy=https://hostname:port"
Environment="no_proxy=169.254.169.254"
Reference: https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-proxy-with-ssm-agent.html#ssm-agent-proxy-systemd
I am running tests on a project I've been assigned to. Everything is triggered by calling tox.
The default tests run with nose, which adds a coverage report; this is the command that tox calls:
django-admin test -s
and the settings file has this configuration for nose:
NOSE_ARGS = [
    '--with-coverage',
    '--cover-erase',
    '--cover-package=app_name',
    '--cover-inclusive'
]
This is the report that's shown while running nose with tox:
Name Stmts Miss Cover
---------------------------------------------------------------------------
app_name/__init__.py 0 0 100%
app_name/apps.py 3 0 100%
app_name/apps_settings.py 12 2 83%
app_name/base.py 118 27 77%
app_name/choices.py 18 0 100%
app_name/constants.py 6 0 100%
app_name/exceptions.py 10 0 100%
app_name/helpers/__init__.py 0 0 100%
app_name/helpers/util.py 20 10 50%
app_name/management/__init__.py 0 0 100%
app_name/migrations/0001_initial.py 9 0 100%
app_name/migrations/__init__.py 0 0 100%
app_name/mixins.py 6 0 100%
app_name/models.py 64 4 94%
app_name/permissions.py 7 3 57%
app_name/serializers/__init__.py 0 0 100%
app_name/serializers/address_serializer.py 7 0 100%
app_name/serializers/base_response_serializer.py 7 0 100%
app_name/serializers/body_request_user_serializer.py 14 0 100%
app_name/serializers/contact_serializer.py 4 0 100%
app_name/serializers/file_serializer.py 11 2 82%
app_name/serializers/iban_serializer.py 3 0 100%
app_name/serializers/identification_serializer.py 11 2 82%
app_name/serializers/payment_account_serializer.py 3 0 100%
app_name/serializers/transfer_serializer.py 20 10 50%
app_name/services/__init__.py 0 0 100%
app_name/services/authentication_service.py 7 0 100%
app_name/services/document_service.py 23 9 61%
app_name/services/user_service.py 37 21 43%
app_name/services/webhook_service.py 26 7 73%
app_name/storage_backends.py 10 0 100%
app_name/views/__init__.py 0 0 100%
app_name/views/webhook_view.py 25 8 68%
---------------------------------------------------------------------------
TOTAL 481 105 78%
----------------------------------------------------------------------
Ran 5 tests in 4.615s
But if I then run coverage report, this is shown:
Name Stmts Miss Cover
-----------------------------------------------------------------------------
app_name/__init__.py 0 0 100%
app_name/apps.py 3 0 100%
app_name/apps_settings.py 12 2 83%
app_name/base.py 118 27 77%
app_name/choices.py 18 0 100%
app_name/constants.py 6 0 100%
app_name/decorators.py 10 7 30%
app_name/exceptions.py 10 0 100%
app_name/helpers/__init__.py 0 0 100%
app_name/helpers/util.py 20 10 50%
app_name/management/__init__.py 0 0 100%
app_name/management/commands/__init__.py 0 0 100%
app_name/management/commands/generate_uuid.py 9 4 56%
app_name/migrations/0001_initial.py 9 0 100%
app_name/migrations/__init__.py 0 0 100%
app_name/mixins.py 6 0 100%
app_name/models.py 64 4 94%
app_name/permissions.py 7 3 57%
app_name/serializers/__init__.py 0 0 100%
app_name/serializers/address_serializer.py 7 0 100%
app_name/serializers/base_response_serializer.py 7 0 100%
app_name/serializers/body_request_token_serializer.py 4 0 100%
app_name/serializers/body_request_user_serializer.py 14 0 100%
app_name/serializers/contact_serializer.py 4 0 100%
app_name/serializers/document_serializer.py 7 0 100%
app_name/serializers/file_serializer.py 11 2 82%
app_name/serializers/files_serializer.py 4 0 100%
app_name/serializers/iban_serializer.py 3 0 100%
app_name/serializers/identification_serializer.py 11 2 82%
app_name/serializers/payment_account_serializer.py 3 0 100%
app_name/serializers/transfer_serializer.py 20 10 50%
app_name/serializers/user_information_serializer.py 7 0 100%
app_name/services/__init__.py 0 0 100%
app_name/services/account_service.py 62 45 27%
app_name/services/authentication_service.py 7 0 100%
app_name/services/document_service.py 23 9 61%
app_name/services/method_service.py 23 15 35%
app_name/services/user_service.py 37 21 43%
app_name/services/webhook_service.py 26 7 73%
app_name/storage_backends.py 10 0 100%
app_name/tests/__init__.py 0 0 100%
app_name/tests/apps/__init__.py 0 0 100%
app_name/tests/apps/apps_test.py 9 0 100%
app_name/tests/helpers/__init__.py 0 0 100%
app_name/tests/helpers/helpers_test.py 7 0 100%
app_name/tests/services/__init__.py 0 0 100%
app_name/tests/services/authentication_service_test.py 8 0 100%
app_name/tests/services/document_service_test.py 13 0 100%
app_name/tests/services/user_service_test.py 13 0 100%
app_name/urls/__init__.py 0 0 100%
app_name/urls/webhook_url.py 7 2 71%
app_name/views/__init__.py 0 0 100%
app_name/views/webhook_view.py 25 8 68%
-----------------------------------------------------------------------------
TOTAL 664 178 73%
Now, as you can see, certain files were ignored by the nose report but shown by coverage, like app_name/services/account_service.py. Since that file contains feature code, it should be shown in the report.
The interesting thing here is that, as far as I know, both libraries (nose and coverage) generate their reports from the same data file, .coverage.
I guess this is default nose behavior. I'm not very familiar with nose, so perhaps someone can tell me why this difference in behavior happens.
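A hedged way to check whether the two reports are simply applying different filters is to restrict the standalone report to roughly the subset the nose plugin printed, for example (the patterns here are only illustrative, not a known-correct reproduction of nose's filtering):

coverage report --include='app_name/*' --omit='app_name/tests/*,app_name/management/commands/*'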
I am trying to understand what the series values stand for in a Prometheus unit test.
The official documentation does not provide any info on this.
For example: fire an alert if any instance is down for more than 10 seconds.
alerting-rules.yml
groups:
  - name: alert_rules
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 10s
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} down"
          description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 10 seconds."
alerting-rules.test.yml
rule_files:
  - alerting-rules.yml
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      - series: 'up{job="prometheus", instance="localhost:9090"}'
        values: '0 0 0 0 0 0 0 0 0 0 0 0 0 0 0'
    alert_rule_test:
      - eval_time: 10m
        alertname: InstanceDown
        exp_alerts:
          - exp_labels:
              severity: critical
              instance: localhost:9090
              job: prometheus
            exp_annotations:
              summary: "Instance localhost:9090 down"
              description: "localhost:9090 of job prometheus has been down for more than 10 seconds."
Originally, I thought that because interval: 1m is 60 seconds and there are 15 numbers, 60 / 15 = 4s, so each value stands for 4 seconds (1 means up, 0 means down).
However, when the values are
values: '0 0 0 0 0 0 0 0 0 0 0 0 0 0 0'
or
values: '1 1 1 1 1 1 1 1 1 0 0 0 0 0 0'
Both will pass the test when I run promtool test rules alerting-rules.test.yml.
But below will fail:
values: '1 1 1 1 1 1 1 1 1 1 0 0 0 0 0'
So my original thought that each number stands for 4s is wrong. If my assumption were correct, the test should only fail when there are fewer than three 0s.
What do the series values stand for in a Prometheus unit test?
Your assumption is incorrect. The numbers in values don't subdivide the interval; each number is the sample the series takes at successive interval steps, starting at time 0. For example:
values: 1 1 1 1 1 1
#       0m 1m 2m 3m 4m 5m
In your example you evaluate at 10m (with eval_time), and the rule has for: 10s, so the alert is only in the firing state at 10m if up == 0 was already true at the previous evaluation step as well. When you change the tenth value (the sample at 9m) to 1, up only becomes 0 at 10m, so at eval_time the alert has just turned pending rather than firing, and the test fails because the expected alert is not triggered.
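Concretely, under that reading (samples at 0m, 1m, 2m, ... and for: 10s on the rule), the two cases line up like this:

# passing: values: '1 1 1 1 1 1 1 1 1 0 0 0 0 0 0'
#   up == 0 from 9m onward, so by eval_time: 10m the alert has been pending
#   for a full minute (> 10s) and is firing.
# failing: values: '1 1 1 1 1 1 1 1 1 1 0 0 0 0 0'
#   up == 0 only from 10m, so at eval_time: 10m the alert has just turned
#   pending, is not yet firing, and exp_alerts does not match.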
I'm trying to insert into a table with a query on an EMR cluster on AWS. The table is created correctly, and a colleague can run the exact same code that I'm using and it won't fail. However, when I try to run the code, I get failures in Map 1 that make the entire job fail with the error below for the query below.
Can someone help me figure out why my job is failing when I run it, but my friend can run it without issue? I've been staring at this for the entire day and can't get past it.
----------------------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
----------------------------------------------------------------------------------------------
Map 1 container RUNNING 13 0 0 13 40 1
Map 3 .......... container SUCCEEDED 1 1 0 0 0 0
Map 5 .......... container SUCCEEDED 1 1 0 0 0 0
Map 7 .......... container SUCCEEDED 1 1 0 0 0 0
Map 8 .......... container SUCCEEDED 1 1 0 0 0 0
Reducer 2 container INITED 6 0 0 6 0 0
Reducer 4 ...... container SUCCEEDED 2 2 0 0 0 0
Reducer 6 ...... container SUCCEEDED 2 2 0 0 0 0
Reducer 9 ...... container SUCCEEDED 2 2 0 0 0 0
----------------------------------------------------------------------------------------------
VERTICES: 07/09 [========>>------------------] 34% ELAPSED TIME: 132.71 s
----------------------------------------------------------------------------------------------
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1544915203536_0453_2_07, diagnostics=[Task failed, taskId=task_1544915203536_0453_2_07_000009, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1544915203536_0453_2_07_000009_0:java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.IllegalArgumentException: [
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.IllegalArgumentException: [VALUE] BINARY is not in the store:
at
I think the key part of the error is:
[tuning_event_start_ts] BINARY is not in the store
The most important point is: not in the store.
Check your query (only the SELECT, without the INSERT).
Which table contains TUNING_EVENT_START_TS?
So it turns out that vectorization was the issue. These were the settings that were being activated at the beginning of the session:
set hive.vectorized.execution.enabled = true;
set hive.vectorized.execution.reduce.enabled = true;
Without these enabled, the job ran more slowly but completed successfully. It seems Hive does not like the timestamp value when vectorization is on. At the bottom of the wiki below there is a section on limitations. The query definitely works without these options set.
https://cwiki.apache.org/confluence/display/Hive/Vectorized+Query+Execution
In summary, timestamps and vectorization don't like each other in Hive... but only sometimes.
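In practice that means either leaving those settings at their defaults or turning vectorization off explicitly for the session before running the INSERT, e.g. (a sketch of the workaround, not an official fix):

-- Workaround sketch: disable vectorized execution for this session only,
-- then run the same INSERT ... SELECT statement.
set hive.vectorized.execution.enabled = false;
set hive.vectorized.execution.reduce.enabled = false;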