Why does murano-test-runner connect to keystone? - unit-testing

I added some unit tests for my package "kubernetes-cluster", following the Tomcat package (see: https://github.com/openstack/murano-apps/blob/master/Tomcat/package/Classes/TomcatTest.yaml).
Then I ran them with the following command (without --config-file or --os-auth-url, and without a murano.conf in /etc/murano/):
murano-test-runner -v io.murano.apps.docker.kubernetes.KuryrCluster io.murano.test.KuryrClusterTest -l <local path to my packages and the core library>
I got this error message:
2017-03-16 07:39:49.978 | 2017-03-16 07:42:50.773 1697 ERROR murano.cmd.test_runner [-] Command failed: 'NoneType' object has no attribute 'replace'
2017-03-16 07:39:49.978 | Traceback (most recent call last):
2017-03-16 07:39:49.978 | File "/home/jenkins/workspace/murano-programming-language-unit-test/.tox/murano-test-runner/lib/python2.7/site-packages/murano/cmd/test_runner.py", line 374, in main
2017-03-16 07:39:49.978 | exit_code = test_runner.run_tests()
2017-03-16 07:39:49.978 | File "/home/jenkins/workspace/murano-programming-language-unit-test/.tox/murano-test-runner/lib/python2.7/site-packages/murano/cmd/test_runner.py", line 213, in run_tests
2017-03-16 07:39:49.978 | ks_opts = self._validate_keystone_opts(self.args)
2017-03-16 07:39:49.978 | File "/home/jenkins/workspace/murano-programming-language-unit-test/.tox/murano-test-runner/lib/python2.7/site-packages/murano/cmd/test_runner.py", line 195, in _validate_keystone_opts
2017-03-16 07:39:49.979 | ks_opts[param] = ks_opts[param].replace('v2.0', 'v3')
2017-03-16 07:39:49.979 | AttributeError: 'NoneType' object has no attribute 'replace'
2017-03-16 07:39:49.979 |
2017-03-16 07:39:49.979 | Command failed: 'NoneType' object has no attribute 'replace'
I just want to check whether my unit tests written in MuranoPL are correct, similar to running pep8 or py27.
Can I run tox -e murano-test-runner like tox -e py27, without connecting to a keystone server and a neutron server? If not, why? If I can, how? By mocking keystoneclient and neutronclient, or some other way?
Can someone help me? Thanks.

I mocked:
keystoneclient.v3.Client (and handled the auth_uri.replace('v2.0', 'v3') AttributeError)
murano.engine.system.net_explorer.NetworkExplorer._get_client
After that, murano-test-runner can run without the --os-* options or a config file.
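For reference, here is a minimal sketch of that kind of patching. It is an assumption on my part, not a verified recipe: the exact patch targets depend on your murano version, and the keystone class normally lives at keystoneclient.v3.client.Client.

# Sketch: patch out the keystone and neutron clients so murano-test-runner
# does not require the --os-* options or a config file. The patch targets are
# the ones mentioned above; adjust them to your murano version. The
# auth_url.replace('v2.0', 'v3') call in test_runner still has to be handled
# separately, as described above.
from unittest import mock

fake_keystone = mock.MagicMock()
fake_keystone.auth_token = 'fake-token'

patches = [
    # Replace the keystone v3 client so no real auth URL or credentials are needed.
    mock.patch('keystoneclient.v3.client.Client', return_value=fake_keystone),
    # Replace the neutron lookup used by the core library's NetworkExplorer.
    mock.patch('murano.engine.system.net_explorer.NetworkExplorer._get_client',
               return_value=mock.MagicMock()),
]

for p in patches:
    p.start()
try:
    pass  # run the tests here, e.g. by calling murano.cmd.test_runner.main()
finally:
    for p in patches:
        p.stop()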

Related

expected <block end>, but found '<scalar>' in 'reader' in .ebextensions error

My command in .ebextensions is failing with the following error:
The configuration file .ebextensions/setvars.config in application version code-pipeline-1674590101261-00ce5911c1ae7ca5d24c81474ba76352008bc540 contains invalid YAML or JSON. YAML exception: Invalid Yaml: while parsing a block mapping in 'reader', line 3, column 5: command: "/opt/elasticbeanstalk/ ... ^ expected <block end>, but found '<scalar>' in 'reader', line 3, column 93: ... nt | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' > ... ^ , JSON exception: Invalid JSON: Unexpected character (c) at position 0.. Update the configuration file.
The .ebextensions/setvars.config contains:
commands:
  setvars:
    command: "/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' >> ~/.bash_profile"
packages:
  yum:
    jq: []
The snippet itself has been copied from the AWS docs. Can someone please help me fix this issue?
I want to run the command inside the double quotes on Elastic Beanstalk when the application is starting.
EDIT: I added a backslash before the first quote, which made the YAML valid, but then I got this error in cfn-init.log:
2023-01-24 20:45:59,593 [ERROR] Command setvars (/"/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' >> ~/.bash_profile") failed
2023-01-24 20:45:59,593 [ERROR] Error encountered during build of prebuild_0_checked_backend: Command setvars failed
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 579, in run_config
CloudFormationCarpenter(config, self._auth_config, self.strict_mode).build(worklog)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 278, in build
self._config.commands)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/command_tool.py", line 127, in apply
raise ToolError(u"Command %s failed" % name)
cfnbootstrap.construction_errors.ToolError: Command setvars failed
2023-01-24 20:45:59,596 [ERROR] -----------------------BUILD FAILED!------------------------
2023-01-24 20:45:59,596 [ERROR] Unhandled exception during build: Command setvars failed
Traceback (most recent call last):
File "/opt/aws/bin/cfn-init", line 181, in <module>
worklog.build(metadata, configSets, strict_mode)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 137, in build
Contractor(metadata, strict_mode).build(configSets, self)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 567, in build
self.run_config(config, worklog)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 579, in run_config
CloudFormationCarpenter(config, self._auth_config, self.strict_mode).build(worklog)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 278, in build
self._config.commands)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/command_tool.py", line 127, in apply
raise ToolError(u"Command %s failed" % name)
cfnbootstrap.construction_errors.ToolError: Command setvars failed
You do not need the quotation marks at the beginning and the end:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' >> ~/.bash_profile
packages:
  yum:
    jq: []
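To see why the original line trips the parser (an illustrative sketch with PyYAML, not part of the original answer): inside the double-quoted scalar, the first embedded " before export ends the scalar, so the rest of the line becomes a second scalar and YAML reports "expected <block end>, but found '<scalar>'". Dropping the outer quotes turns the whole line into one plain scalar, which keeps the inner quotes literally:

# Sketch: reproduce the parse failure locally with PyYAML (pip install pyyaml).
import yaml

broken = r'''
commands:
  setvars:
    command: "/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' >> ~/.bash_profile"
'''

fixed = r'''
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' >> ~/.bash_profile
'''

try:
    yaml.safe_load(broken)
except yaml.YAMLError as exc:
    print(exc)                    # expected <block end>, but found '<scalar>'

print(yaml.safe_load(fixed))      # parses into a plain dict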

Getting 'invalid reference format' error on localstack when using 'aws lambda invoke'

I'm using localstack/terraform/aws (latest versions) to play with Lambda on AWS locally. The configuration can be found here: https://github.com/wentao-daommo/aws-local
While I can successfully set up/deploy everything and list my lambda function via 'aws lambda list-functions', I am unable to invoke the function with the command
aws --endpoint-url=http://localhost:4566 lambda invoke --function-name=handler --payload='' test.json
From the command line, I got this error:
{
    "StatusCode": 200,
    "FunctionError": "Unhandled",
    "LogResult": "",
    "ExecutedVersion": "$LATEST"
}
and from localstack, I saw this error message, which I don't understand at all
localstack_1 | ERROR: 'docker create --rm --name "localstack_lambda_arn_aws_lambda_us-east-1_000000000000_function_handler" --entrypoint /bin/bash --interactive -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -e HOSTNAME="$HOSTNAME" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e AWS_LAMBDA_EVENT_BODY='{}' -e LOCALSTACK_HOSTNAME=192.168.65.2 -e EDGE_PORT=4566 -e _HANDLER=exports.handler -e AWS_LAMBDA_FUNCTION_TIMEOUT=3 -e AWS_LAMBDA_FUNCTION_NAME=handler -e AWS_LAMBDA_FUNCTION_VERSION='$LATEST' -e AWS_LAMBDA_FUNCTION_INVOKED_ARN=arn:aws:lambda:us-east-1:000000000000:function:handler -e AWS_LAMBDA_COGNITO_IDENTITY='{}' -e _LAMBDA_SERVER_PORT=5002 "lambci/lambda:"': exit code 1; output: b'invalid reference format\n'
localstack_1 | 2021-01-26T04:08:07:INFO:localstack.services.awslambda.lambda_api: Error executing Lambda function arn:aws:lambda:us-east-1:000000000000:function:handler: Command 'docker create --rm --name "localstack_lambda_arn_aws_lambda_us-east-1_000000000000_function_handler" --entrypoint /bin/bash --interactive -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -e HOSTNAME="$HOSTNAME" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e AWS_LAMBDA_EVENT_BODY='{}' -e LOCALSTACK_HOSTNAME=192.168.65.2 -e EDGE_PORT=4566 -e _HANDLER=exports.handler -e AWS_LAMBDA_FUNCTION_TIMEOUT=3 -e AWS_LAMBDA_FUNCTION_NAME=handler -e AWS_LAMBDA_FUNCTION_VERSION='$LATEST' -e AWS_LAMBDA_FUNCTION_INVOKED_ARN=arn:aws:lambda:us-east-1:000000000000:function:handler -e AWS_LAMBDA_COGNITO_IDENTITY='{}' -e _LAMBDA_SERVER_PORT=5002 "lambci/lambda:"' returned non-zero exit status 1. Traceback (most recent call last):
localstack_1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 550, in run_lambda
localstack_1 | result = LAMBDA_EXECUTOR.execute(func_arn, func_details, event, context=context,
localstack_1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 178, in execute
localstack_1 | return do_execute()
localstack_1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 170, in do_execute
localstack_1 | return _run(func_arn=func_arn)
localstack_1 | File "/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py", line 149, in wrapped
localstack_1 | raise e
localstack_1 | File "/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py", line 145, in wrapped
localstack_1 | result = func(*args, **kwargs)
localstack_1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 161, in _run
localstack_1 | raise e
localstack_1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 149, in _run
localstack_1 | result = self._execute(func_arn, func_details, event, context, version)
localstack_1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 399, in _execute
localstack_1 | return super(LambdaExecutorReuseContainers, self)._execute(func_arn, *args, **kwargs)
localstack_1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 323, in _execute
localstack_1 | cmd = self.prepare_execution(func_details, environment, command)
localstack_1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 366, in prepare_execution
localstack_1 | container_info = self.prime_docker_container(func_details, env_vars.items(), lambda_cwd)
localstack_1 | File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 474, in prime_docker_container
localstack_1 | run(cmd)
localstack_1 | File "/opt/code/localstack/localstack/utils/common.py", line 1312, in run
localstack_1 | return do_run(cmd)
localstack_1 | File "/opt/code/localstack/localstack/utils/common.py", line 1309, in do_run
localstack_1 | return bootstrap.run(cmd, **kwargs)
localstack_1 | File "/opt/code/localstack/localstack/utils/bootstrap.py", line 656, in run
localstack_1 | raise e
localstack_1 | File "/opt/code/localstack/localstack/utils/bootstrap.py", line 616, in run
localstack_1 | output = subprocess.check_output(cmd, shell=True, stderr=stderr, env=env_dict, cwd=cwd)
localstack_1 | File "/usr/lib/python3.8/subprocess.py", line 411, in check_output
localstack_1 | return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
localstack_1 | File "/usr/lib/python3.8/subprocess.py", line 512, in run
localstack_1 | raise CalledProcessError(retcode, process.args,
localstack_1 | subprocess.CalledProcessError: Command 'docker create --rm --name "localstack_lambda_arn_aws_lambda_us-east-1_000000000000_function_handler" --entrypoint /bin/bash --interactive -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -e HOSTNAME="$HOSTNAME" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e AWS_LAMBDA_EVENT_BODY='{}' -e LOCALSTACK_HOSTNAME=192.168.65.2 -e EDGE_PORT=4566 -e _HANDLER=exports.handler -e AWS_LAMBDA_FUNCTION_TIMEOUT=3 -e AWS_LAMBDA_FUNCTION_NAME=handler -e AWS_LAMBDA_FUNCTION_VERSION='$LATEST' -e AWS_LAMBDA_FUNCTION_INVOKED_ARN=arn:aws:lambda:us-east-1:000000000000:function:handler -e AWS_LAMBDA_COGNITO_IDENTITY='{}' -e _LAMBDA_SERVER_PORT=5002 "lambci/lambda:"' returned non-zero exit status 1.
Please help. Thanks!
Your handler is incorrect and you are missing the runtime. I would also recommend using a standard main.js in the form of:
exports.handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify("Hi from lambda on localstack")
  }
};
Then, your aws_lambda_function with correct handler and runtime should be:
resource "aws_lambda_function" "lambda" {
filename = "lambda_file.zip"
function_name = "handler"
runtime = "nodejs12.x"
role = aws_iam_role.iam_for_lambda.arn
handler = "main.handler"
source_code_hash = filebase64sha256(data.archive_file.lambda_file.output_path)
}
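As a quick check after redeploying (a hedged sketch, not from the original answer; it assumes the default localstack edge port 4566 and the function name above):

# Sketch: invoke the redeployed function through localstack's edge port with boto3.
import json
import boto3

client = boto3.client(
    'lambda',
    endpoint_url='http://localhost:4566',   # localstack edge port
    region_name='us-east-1',
    aws_access_key_id='test',               # localstack accepts dummy credentials
    aws_secret_access_key='test',
)

response = client.invoke(FunctionName='handler', Payload=json.dumps({}))
print(response['StatusCode'])               # expect 200 and no "FunctionError" key
print(json.load(response['Payload']))       # the handler's return value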

Access denied to S3 via TaskCat

I want to use TaskCat for my deployments. Everything is nice, except (as always) the permissions. I created a bucket for my templates, which is referred to in the config files. I call taskcat test run, and after the template is uploaded to my bucket I receive an error that the stack creation failed due to an S3 error: Access Denied.
Since I'm able to upload the template via TaskCat, I have the correct permissions with my account. Do I need to add a bucket permission so that CloudFormation can access the bucket?
The error output:
[taskcat ASCII-art banner]
version 0.9.23
[WARN ] : ---
[WARN ] : Linting detected issues in: mypath/template.yml
[WARN ] : line 14 [2001] [Check if Parameters are Used] Parameter AZone3 not used.
[INFO ] : Will not delete bucket created outside of taskcat task-cat-bucket
[ERROR ] : ClientError An error occurred (ValidationError) when calling the CreateStack operation: S3 error: Access Denied
For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
Exception ignored in: <function Pool.__del__ at 0x7f9593cec790>
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/pool.py", line 268, in __del__
self._change_notifier.put(None)
File "/usr/lib/python3.8/multiprocessing/queues.py", line 368, in put
self._writer.send_bytes(obj)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/usr/lib/python3.8/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
OSError: [Errno 9] Bad file descriptor
Exception ignored in: <function Pool.__del__ at 0x7f9593cec790>
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/pool.py", line 268, in __del__
self._change_notifier.put(None)
File "/usr/lib/python3.8/multiprocessing/queues.py", line 368, in put
self._writer.send_bytes(obj)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/usr/lib/python3.8/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
OSError: [Errno 9] Bad file descriptor
When you launch the CloudFormation stack via the console, the credentials of the user you are logged in with are used for all the surrounding operations.
Being able to upload to the S3 bucket does not directly translate into being able to download objects as well.
So check whether your configured credentials have the necessary permissions for the operation.
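One quick way to check that (an illustrative sketch; the bucket name is taken from the log above and the key is a placeholder for your uploaded template) is to try the read with the same credentials taskcat uses:

# Sketch: confirm that the configured credentials can read, not just write,
# the uploaded template. The key below is a placeholder.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')   # same profile/credentials that taskcat picks up

try:
    s3.get_object(Bucket='task-cat-bucket', Key='path/to/template.yml')
    print('s3:GetObject works, so CloudFormation should be able to read the template')
except ClientError as err:
    # AccessDenied here points at a missing s3:GetObject permission or a
    # restrictive bucket policy, matching the CreateStack error above.
    print(err.response['Error']['Code'])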

SageMaker endpoint deployment keeps failing trying to find a dependency file

SageMaker endpoint deployment keeps failing while trying to find a dependency file.
In the example below, I'm using "labels.txt" in a function load_labels() that gets called from the model_fn() function.
project folder structure
--model (directory)
   |--code (directory)
   |   |--requirements.txt
   |   |--train.py (entry point)
   |   |--labels.txt
   |--notebook_train_deploy.ipynb
train.py
def load_labels(file_name_category='labels.txt'):
    labels = list()
    with open(file_name_category) as label_file:
        for line in label_file:
            labels.append(line.strip().split(' ')[0][:])
    _out_labels = tuple(labels)
    return _out_labels

def model_fn(model_dir):
    labels = load_labels()
    num_labels = len(labels)
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model_ft = models.resnet18(pretrained=True)
    .
    .
    .
    with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
        model_ft.load_state_dict(torch.load(f))
    model_ft.eval()
    return model_ft.to(device)
notebook_train_deploy.ipynb
pytorch_model = PyTorchModel(model_data='s3://sagemaker-poc/model.tar.gz',
                             role=role,
                             source_dir='code',
                             entry_point='train.py',
                             framework_version='1.0.0',
                             dependencies=['./code/labels.txt']
                             )

predictor = pytorch_model.deploy(
    instance_type='ml.t2.medium',
    initial_instance_count=1)
ERROR
algo-1-esw8d_1 | [2020-04-29 19:33:40 +0000] [22] [ERROR] Error handling request /ping
algo-1-esw8d_1 | Traceback (most recent call last):
algo-1-esw8d_1 | File "/usr/local/lib/python3.6/dist-packages/sagemaker_containers/_functions.py", line 85, in wrapper
algo-1-esw8d_1 | return fn(*args, **kwargs)
algo-1-esw8d_1 | File "/usr/local/lib/python3.6/dist-packages/train.py", line 49, in model_fn
algo-1-esw8d_1 | labels = load_labels()
algo-1-esw8d_1 | File "/usr/local/lib/python3.6/dist-packages/train.py", line 25, in load_labels
algo-1-esw8d_1 | with open(os.path.join(file_name_category)) as label_file:
algo-1-esw8d_1 | FileNotFoundError: [Errno 2] No such file or directory: 'labels.txt'

Django tests failing when run with all test cases

I have a problem with tests. When I run some tests separately, they pass. When I run them all together, they fail.
@mock.patch(
    'apps.abstract.validators.years_range_is_not_future', new=fake_years_range_is_not_future
)
def test_create_building_with_validation_of_foundation_period(self):
    self.c.login(username=self.user.username, password='111')
    response = self.c.post(
        '/en/api/buildings/',
        data={
            'name': "New Building",
            'foundation_period': {
                'lower': MIN_INT_VALUE,
                'upper': MAX_INT_VALUE
            },
            'stream': {
                'uuid': s_uuid(self.stream)
            }
        }
    )
    self.assertEqual(response.status_code, status.HTTP_201_CREATED)
I read about this problem here
why would a django test fail only when the full test suite is run?
and tried to patch the validator in the serializer file as shown here
@mock.patch(
    'apps.buildings.api.serializers.years_range_is_not_future', new=fake_years_range_is_not_future
)
def test_create_building_with_validation_of_foundation_period(self):
    ..............................................................
but then I get an exception that I don't understand:
Error
Traceback (most recent call last):
File "/usr/lib/python3.5/unittest/mock.py", line 1049, in _dot_lookup
return getattr(thing, comp)
AttributeError: module 'apps.buildings.api' has no attribute 'serializers'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.5/unittest/mock.py", line 1149, in patched
arg = patching.__enter__()
File "/usr/lib/python3.5/unittest/mock.py", line 1205, in __enter__
self.target = self.getter()
File "/usr/lib/python3.5/unittest/mock.py", line 1375, in <lambda>
getter = lambda: _importer(target)
File "/usr/lib/python3.5/unittest/mock.py", line 1062, in _importer
thing = _dot_lookup(thing, comp, import_path)
File "/usr/lib/python3.5/unittest/mock.py", line 1051, in _dot_lookup
__import__(import_path)
File "/home/env/project/apps/buildings/api/serializers.py", line 12, in <module>
from apps.communities.api.serializers import CommunityBriefSerializer
File "/home/env/project/apps/communities/api/serializers.py", line 297, in <module>
class CommunityOfficialRequestBuildingSerializer(BaseCommunityOfficialRequestSerializer):
File "/home/rp/env/project/apps/communities/api/serializers.py", line 299, in CommunityOfficialRequestBuildingSerializer
from apps.buildings.api.serializers import BuildingBriefSerializer
ImportError: cannot import name 'BuildingBriefSerializer'
Please help me understand what I'm doing wrong.
project structure (__init__.py files not listed)
project
|__apps
    |__communities
    |   |_api
    |       |_serializers.py
    |
    |__buildings
    |   |_api
    |   |   |_serializers.py
    |   |
    |   |_tests
    |       |_test.py
    |
    |_abstract
        |_validators.py
Seeing this,
Traceback (most recent call last):
File "/home/rp/env/project/apps/communities/api/serializers.py", line 299, in CommunityOfficialRequestBuildingSerializer
from apps.buildings.api.serializers import BuildingBriefSerializer
suggests that your import statement is inside a class or a def or some other statement.
Maybe the import statement is executed after you mocked apps.buildings.api.serializers. If you move this import to the top of the file, then BuildingBriefSerializer will probably become available before apps.buildings.api.serializers is mocked, and your tests will pass.
This would also explain why the tests pass when you run them individually.
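To illustrate the failure mode in isolation (a hypothetical two-module demo; the file and class names only mimic the ones in the traceback):

# Sketch: reproduce the circular-import ImportError from the traceback with two
# throwaway modules written to the current directory.
import importlib
import pathlib
import sys

pathlib.Path('buildings_serializers.py').write_text(
    "from communities_serializers import CommunityBriefSerializer\n"
    "class BuildingBriefSerializer:\n"
    "    pass\n"
)
pathlib.Path('communities_serializers.py').write_text(
    "class CommunityBriefSerializer:\n"
    "    pass\n"
    "class CommunityOfficialRequestBuildingSerializer:\n"
    "    # import inside the class body, as in the traceback's serializers.py\n"
    "    from buildings_serializers import BuildingBriefSerializer\n"
)

sys.path.insert(0, '.')
try:
    # Importing the 'buildings' module first, which is what mock.patch does when
    # it resolves 'apps.buildings.api.serializers', triggers the cycle.
    importlib.import_module('buildings_serializers')
except ImportError as exc:
    print(exc)   # cannot import name 'BuildingBriefSerializer' ...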