I want to use taskcat for my deployments. Everything works nicely except (as always) the permissions. I created a bucket for my templates, which is referenced in the config files. When I run taskcat test run, the template is uploaded to my bucket, but then I receive an error that the stack creation failed due to "S3 error: Access Denied".
Since I'm able to upload the template via taskcat, my account appears to have the correct permissions. Do I need to add a bucket permission so that CloudFormation can access the bucket?
The error output:
[taskcat ASCII-art banner]
version 0.9.23
[WARN ] : ---
[WARN ] : Linting detected issues in: mypath/template.yml
[WARN ] : line 14 [2001] [Check if Parameters are Used] Parameter AZone3 not used.
[INFO ] : Will not delete bucket created outside of taskcat task-cat-bucket
[ERROR ] : ClientError An error occurred (ValidationError) when calling the CreateStack operation: S3 error: Access Denied
For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
Exception ignored in: <function Pool.__del__ at 0x7f9593cec790>
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 268, in __del__
    self._change_notifier.put(None)
  File "/usr/lib/python3.8/multiprocessing/queues.py", line 368, in put
    self._writer.send_bytes(obj)
  File "/usr/lib/python3.8/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/usr/lib/python3.8/multiprocessing/connection.py", line 411, in _send_bytes
    self._send(header + buf)
  File "/usr/lib/python3.8/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
OSError: [Errno 9] Bad file descriptor
When you launch a CloudFormation stack via the console, the credentials of the user you are logged in with are used for all the operations involved.
Being able to upload to the S3 bucket does not automatically mean you can also download objects from it.
So check whether your configured credentials have the necessary permissions for the operation.
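For example, CloudFormation fetches the template with the caller's credentials, so the calling identity needs read access to the bucket. A minimal IAM policy sketch (not taskcat's official policy; the bucket name is taken from the task-cat-bucket mentioned in the log above):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::task-cat-bucket",
        "arn:aws:s3:::task-cat-bucket/*"
      ]
    }
  ]
}

Attaching something like this to the user or role that runs taskcat, then re-running the test, should tell you whether a missing s3:GetObject was the cause.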
Related
My command in .ebextensions is failing with the following error:
The configuration file .ebextensions/setvars.config in application version code-pipeline-1674590101261-00ce5911c1ae7ca5d24c81474ba76352008bc540 contains invalid YAML or JSON. YAML exception: Invalid Yaml: while parsing a block mapping in 'reader', line 3, column 5: command: "/opt/elasticbeanstalk/ ... ^ expected <block end>, but found '<scalar>' in 'reader', line 3, column 93: ... nt | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' > ... ^ , JSON exception: Invalid JSON: Unexpected character (c) at position 0.. Update the configuration file.
The .ebextensions/setvars.config contains:
commands:
  setvars:
    command: "/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' >> ~/.bash_profile"
packages:
  yum:
    jq: []
The snippet itself was copied from the AWS docs. Can someone please help me fix this issue?
I want to run the command inside the double quotes on Elastic Beanstalk when the application starts.
EDIT: I added a backslash before the first quote, which made the YAML valid, but then I got this error in cfn-init.log:
2023-01-24 20:45:59,593 [ERROR] Command setvars (/"/opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' >> ~/.bash_profile") failed
2023-01-24 20:45:59,593 [ERROR] Error encountered during build of prebuild_0_checked_backend: Command setvars failed
Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 579, in run_config
    CloudFormationCarpenter(config, self._auth_config, self.strict_mode).build(worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 278, in build
    self._config.commands)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/command_tool.py", line 127, in apply
    raise ToolError(u"Command %s failed" % name)
cfnbootstrap.construction_errors.ToolError: Command setvars failed
2023-01-24 20:45:59,596 [ERROR] -----------------------BUILD FAILED!------------------------
2023-01-24 20:45:59,596 [ERROR] Unhandled exception during build: Command setvars failed
Traceback (most recent call last):
  File "/opt/aws/bin/cfn-init", line 181, in <module>
    worklog.build(metadata, configSets, strict_mode)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 137, in build
    Contractor(metadata, strict_mode).build(configSets, self)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 567, in build
    self.run_config(config, worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 579, in run_config
    CloudFormationCarpenter(config, self._auth_config, self.strict_mode).build(worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 278, in build
    self._config.commands)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/command_tool.py", line 127, in apply
    raise ToolError(u"Command %s failed" % name)
cfnbootstrap.construction_errors.ToolError: Command setvars failed
You do not need the quotation marks at the beginning and the end:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' >> ~/.bash_profile
packages:
  yum:
    jq: []
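Alternatively, a YAML literal block scalar avoids the quoting problem entirely, because nothing inside the block needs to be escaped. A sketch of the same command written that way:

commands:
  setvars:
    command: |
      /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' >> ~/.bash_profile
packages:
  yum:
    jq: []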
I'm trying to use s3fs in Python to connect to an S3 bucket. The associated credentials are saved in a profile called 'pete' in ~/.aws/credentials:
[default]
aws_access_key_id=****
aws_secret_access_key=****
[pete]
aws_access_key_id=****
aws_secret_access_key=****
This seems to work in AWS CLI (on Windows):
$>aws s3 ls s3://my-bucket/ --profile pete
PRE other-test-folder/
PRE test-folder/
But I get a permission-denied error when I use what should be equivalent code with the s3fs package in Python:
import s3fs
import requests
s3 = s3fs.core.S3FileSystem(profile = 'pete')
s3.ls('my-bucket')
I get this error:
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 504, in _lsdir
    async for i in it:
  File "C:\ProgramData\Anaconda3\lib\site-packages\aiobotocore\paginate.py", line 32, in __anext__
    response = await self._make_request(current_kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\aiobotocore\client.py", line 154, in _make_api_call
    raise error_class(parsed_response, operation_name)
ClientError: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<ipython-input-9-4627a44a7ac3>", line 5, in <module>
    s3.ls('my-bucket')
  File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 993, in ls
    files = maybe_sync(self._ls, self, path, refresh=refresh)
  File "C:\ProgramData\Anaconda3\lib\site-packages\fsspec\asyn.py", line 97, in maybe_sync
    return sync(loop, func, *args, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\fsspec\asyn.py", line 68, in sync
    raise exc.with_traceback(tb)
  File "C:\ProgramData\Anaconda3\lib\site-packages\fsspec\asyn.py", line 52, in f
    result[0] = await future
  File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 676, in _ls
    return await self._lsdir(path, refresh)
  File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 527, in _lsdir
    raise translate_boto_error(e) from e
PermissionError: Access Denied
I have to assume it's not a config issue within s3 because I can access s3 through the CLI. So something must be off with my s3fs code, but I can't find a whole lot of documentation on profiles in s3fs to figure out what's going on. Any help is of course appreciated.
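One way to rule out profile-resolution issues on the s3fs side is to force the profile through the environment before the filesystem object is created; botocore (which aiobotocore and s3fs build on) honors AWS_PROFILE. A minimal sketch, assuming the same 'pete' profile:

import os

# Make botocore resolve credentials from the 'pete' profile
# before s3fs creates its session.
os.environ["AWS_PROFILE"] = "pete"

import s3fs

fs = s3fs.S3FileSystem()
print(fs.ls("my-bucket"))

If this works while profile='pete' does not, the problem is in how the profile keyword is being passed to the underlying session rather than in the S3 permissions themselves.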
SageMaker endpoint deployment keeps failing while trying to find a dependency file.
In the example below, I'm using "labels.txt" in a function load_labels() that gets called in the model_fn() function.
project folder structure
--model (directory)
  |--code (directory)
  |  |--requirements.txt
  |  |--train.py (entry point)
  |  |--labels.txt
  |--notebook_train_deploy.ipynb
train.py
def load_labels(file_name_category='labels.txt'):
    labels = list()
    with open(file_name_category) as label_file:
        for line in label_file:
            labels.append(line.strip().split(' ')[0][:])
    _out_labels = tuple(labels)
    return _out_labels

def model_fn(model_dir):
    labels = load_labels()
    num_labels = len(labels)
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model_ft = models.resnet18(pretrained=True)
    .
    .
    .
    with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
        model_ft.load_state_dict(torch.load(f))
    model_ft.eval()
    return model_ft.to(device)
notebook_train_deploy.ipynb
pytorch_model = PyTorchModel(model_data='s3://sagemaker-poc/model.tar.gz',
                             role=role,
                             source_dir='code',
                             entry_point='train.py',
                             framework_version='1.0.0',
                             dependencies=['./code/labels.txt'])

predictor = pytorch_model.deploy(
    instance_type='ml.t2.medium',
    initial_instance_count=1)
ERROR
algo-1-esw8d_1 | [2020-04-29 19:33:40 +0000] [22] [ERROR] Error handling request /ping
algo-1-esw8d_1 | Traceback (most recent call last):
algo-1-esw8d_1 |   File "/usr/local/lib/python3.6/dist-packages/sagemaker_containers/_functions.py", line 85, in wrapper
algo-1-esw8d_1 |     return fn(*args, **kwargs)
algo-1-esw8d_1 |   File "/usr/local/lib/python3.6/dist-packages/train.py", line 49, in model_fn
algo-1-esw8d_1 |     labels = load_labels()
algo-1-esw8d_1 |   File "/usr/local/lib/python3.6/dist-packages/train.py", line 25, in load_labels
algo-1-esw8d_1 |     with open(os.path.join(file_name_category)) as label_file:
algo-1-esw8d_1 | FileNotFoundError: [Errno 2] No such file or directory: 'labels.txt'
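A likely cause is that open('labels.txt') resolves against the serving process's working directory rather than the directory the code bundle was unpacked into. A hedged sketch of a fix, assuming labels.txt is shipped next to train.py in the code directory:

import os

def load_labels(file_name_category='labels.txt'):
    # Resolve the file relative to this module's directory instead of
    # the process's current working directory.
    base_dir = os.path.dirname(os.path.abspath(__file__))
    labels = []
    with open(os.path.join(base_dir, file_name_category)) as label_file:
        for line in label_file:
            labels.append(line.strip().split(' ')[0])
    return tuple(labels)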
I'm trying to execute the following Python code:
import logging
import sys
import docker, boto3
from base64 import b64decode
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
LOCAL_REPOSITORY = '111111111111.dkr.ecr.us-east-1.amazonaws.com/my_repo:latest'
image = '111111111111.dkr.ecr.us-east-1.amazonaws.com/my_repo'
ecr_registry, _ = image.split('/', 1)
client = docker.from_env()
# Get login credentials from AWS for the ECR registry.
ecr = boto3.client('ecr')
response = ecr.get_authorization_token()
token = b64decode(response['authorizationData'][0]['authorizationToken'])
username, password = token.decode('utf-8').split(':', 1)
# Log in to the ECR registry with Docker.
client.login(username, password, registry=ecr_registry)
logging.info("loggined")
client.images.pull(image, auth_config={
    username: username,
    password: password
})
And got this exception:
C:\myPath>python app/pull_example.py
INFO:botocore.credentials:Found credentials in environment variables.
INFO:root:loggined
Traceback (most recent call last):
  File "C:\Python3\lib\site-packages\docker\api\client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "C:\Python3\lib\site-packages\requests\models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localnpipe/v1.35/images/create?fromImage=111111111111.dkr.ecr.us-east-1.amazonaws.com%2Fmy_repo

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "app/pull_example.py", line 41, in <module>
    password: password
  File "C:\Python3\lib\site-packages\docker\models\images.py", line 445, in pull
    repository, tag=tag, stream=True, **kwargs
  File "C:\Python3\lib\site-packages\docker\api\image.py", line 415, in pull
    self._raise_for_status(response)
  File "C:\Python3\lib\site-packages\docker\api\client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "C:\Python3\lib\site-packages\docker\errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("Get https://111111111111.dkr.ecr.us-east-1.amazonaws.com/v2/my_repo/tags/list: no basic auth credentials")
What is the problem? Why can I not pull the image even after the client.login call, which completes without any exceptions? What is the correct way to log in and pull an image from an ECR repository with docker-py?
This happened due to https://github.com/docker/docker-py/issues/2157.
Deleting ~/.docker/config.json fixed the issue.
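Separately, note that the auth_config dictionary in the original snippet uses the variables themselves as keys; docker-py expects the literal string keys 'username' and 'password'. A corrected sketch of the pull call, reusing the names from the snippet above:

# auth_config must use the string keys 'username' and 'password'.
client.images.pull(image, auth_config={
    'username': username,
    'password': password,
})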
I would like to ask whether it is currently possible to use the spark-ec2 script (https://spark.apache.org/docs/latest/ec2-scripts.html) with credentials that consist not only of aws_access_key_id and aws_secret_access_key, but also include aws_security_token.
When I try to run the script I get the following error message:
ERROR:boto:Caught exception reading instance data
Traceback (most recent call last):
  File "/Users/zikes/opensource/spark/ec2/lib/boto-2.34.0/boto/utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 404, in open
    response = self._open(req, data)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 422, in _open
    '_open', req)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1214, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1184, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno 64] Host is down>
ERROR:boto:Unable to read instance data, giving up
No handler was ready to authenticate. 1 handlers were checked. ['QuerySignatureV2AuthHandler'] Check your credentials
Does anyone have an idea what could possibly be wrong? Is aws_security_token the problem?
It seems to me to be more of a boto problem than a Spark problem.
I have tried both:
1) setting credentials in ~/.aws/credentials and ~/.aws/config
2) setting credentials via environment variables:
export aws_access_key_id=<my_aws_access_key>
export aws_secret_access_key=<my_aws_seecret_key>
export aws_security_token=<my_aws_security_token>
My launch command is:
./spark-ec2 -k my_key -i my_key.pem --additional-tags "mytag:tag1,mytag2:tag2" --instance-profile-name "profile1" -s 1 launch test
You can set up your credentials and config using the command aws configure.
I had the same issue, but in my case my AWS_SECRET_ACCESS_KEY contained a slash. I regenerated the key until there was no slash, and it worked.
The problem was that I did not use a profile called default; after renaming it, everything worked well.
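For reference, a minimal ~/.aws/credentials sketch with a session token under the default profile (the values are placeholders; the AWS CLI and newer boto versions read aws_session_token, while older boto releases used aws_security_token):

[default]
aws_access_key_id = <my_aws_access_key>
aws_secret_access_key = <my_aws_secret_key>
aws_session_token = <my_aws_security_token>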