I am running tests in Jenkins for a Django application, but I'm getting 403 errors from the tests that upload to S3. The values are exported as environment variables and accessed in the settings file with values.Value() (https://django-configurations.readthedocs.org/en/stable/values/).
# settings.py
AWS_ACCESS_KEY_ID = values.Value()
AWS_SECRET_ACCESS_KEY = values.Value()
My console output looks like this:
[EnvInject] - Injecting as environment variables the properties content
AWS_ACCESS_KEY_ID='ABC123'
AWS_SECRET_ACCESS_KEY='blah'
[EnvInject] - Variables injected successfully.
...
+ python manage.py test
======================================================================
ERROR: test_document (documents.tests.DocTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/lib/jenkins/jobs/Job/workspace/myapp/documents/tests.py", line 25, in setUp
self.document.doc.save('test_file', File(f), save=True)
File "/var/lib/jenkins/jobs/Job/workspace/.venv/local/lib/python2.7/site-packages/django/db/models/fields/files.py", line 89, in save
self.name = self.storage.save(name, content)
File "/var/lib/jenkins/jobs/Job/workspace/.venv/local/lib/python2.7/site-packages/django/core/files/storage.py", line 51, in save
name = self._save(name, content)
File "/var/lib/jenkins/jobs/Job/workspace/.venv/local/lib/python2.7/site-packages/storages/backends/s3boto.py", line 385, in _save
key = self.bucket.get_key(encoded_name)
File "/var/lib/jenkins/jobs/Job/workspace/.venv/local/lib/python2.7/site-packages/boto/s3/bucket.py", line 192, in get_key
key, resp = self._get_key_internal(key_name, headers, query_args_l)
File "/var/lib/jenkins/jobs/Job/workspace/.venv/local/lib/python2.7/site-packages/boto/s3/bucket.py", line 230, in _get_key_internal
response.status, response.reason, '')
S3ResponseError: S3ResponseError: 403 Forbidden
-------------------- >> begin captured logging << --------------------
boto: DEBUG: path=/documents/test_file
boto: DEBUG: auth_path=/my-bucket/documents/test_file
boto: DEBUG: Method: HEAD
boto: DEBUG: Path: /documents/test_file
boto: DEBUG: Data:
boto: DEBUG: Headers: {}
boto: DEBUG: Host: my-bucket.s3.amazonaws.com
boto: DEBUG: Port: 443
boto: DEBUG: Params: {}
boto: DEBUG: Token: None
boto: DEBUG: StringToSign:
HEAD
Sun, 13 Sep 2015 06:02:36 GMT
/my-bucket/documents/test_file
boto: DEBUG: Signature:
AWS 'ABC123':RanDoM123#*$
boto: DEBUG: Final headers: {'Date': 'Sun, 13 Sep 2015 06:02:36 GMT', 'Content-Length': '0', 'Authorization': u"AWS 'ABC123':RanDoM123#*$, 'User-Agent': 'Boto/2.38.0 Python/2.7.3 Linux/3.2.0-4-amd64'}
boto: DEBUG: Response headers: [('x-amz-id-2', 'MoReRanDom123*&^'), ('server', 'AmazonS3'), ('transfer-encoding', 'chunked'), ('x-amz-request-id', '3484RANDOM19394'), ('date', 'Sun, 13 Sep 2015 06:02:36 GMT'), ('content-type', 'application/xml')]
--------------------- >> end captured logging << ---------------------
Am I missing something important in order to upload files to S3 from Jenkins? I'm having no issues on my local machine.
Does your CORS configuration on this bucket have any restrictions per IP? For example, if AllowedOrigin specifies the IP, that could be one reason why it fails.
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        ...
    </CORSRule>
</CORSConfiguration>
I would also print out your AWS values on Jenkins for debugging, just to confirm that the correct values are being used in that environment.
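A quick, dependency-free way to do that check is to print the repr() of each variable from the same Python environment the tests run in; any stray characters (such as the quotes visible in the EnvInject output above, which may or may not be part of the actual values) show up immediately:

```python
import os

# Print the repr() of each credential-related variable so that any stray
# characters (e.g. literal quote marks from the injected properties) are visible.
for var in ('AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'):
    print('%s = %r' % (var, os.environ.get(var)))
```

If the output shows something like AWS_ACCESS_KEY_ID = "'ABC123'", the quotes are part of the value itself and would break the request signature.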
I have this code in my Django project:
# implementation
module_dir = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))  # get current directory
box_config_path = os.path.join(module_dir, 'py_scripts/transactapi_funded_trades/config.json')  # the config JSON downloaded
config = JWTAuth.from_settings_file(box_config_path)  # create a config via the JSON file
client = Client(config)  # create a client via the config
user_to_impersonate = client.user(user_id='8********6')  # get the main user
user_client = client.as_user(user_to_impersonate)  # impersonate the main user
The above code is what I use to switch from the service account created by Box to the main account user with ID 8********6. No error is thrown so far, but when I try to implement the actual logic to retrieve the files, I get this:
[2022-09-13 02:50:26,146: INFO/MainProcess] GET https://api.box.com/2.0/folders/0/items {'headers': {'As-User': '8********6',
'Authorization': '---LMHE',
'User-Agent': 'box-python-sdk-3.3.0',
'X-Box-UA': 'agent=box-python-sdk/3.3.0; env=python/3.10.4'},
'params': {'offset': 0}}
[2022-09-13 02:50:26,578: WARNING/MainProcess] "GET https://api.box.com/2.0/folders/0/items?offset=0" 403 0
{'Date': 'Mon, 12 Sep 2022 18:50:26 GMT', 'Transfer-Encoding': 'chunked', 'x-envoy-upstream-service-time': '100', 'www-authenticate': 'Bearer realm="Service", error="insufficient_scope", error_description="The request requires higher privileges than provided by the access token."', 'box-request-id': '07cba17694f7ea32f0c2cd42790bce39e', 'strict-transport-security': 'max-age=31536000', 'Via': '1.1 google', 'Alt-Svc': 'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"'}
b''
[2022-09-13 02:50:26,587: WARNING/MainProcess] Message: None
Status: 403
Code: None
Request ID: None
Headers: {'Date': 'Mon, 12 Sep 2022 18:50:26 GMT', 'Transfer-Encoding': 'chunked', 'x-envoy-upstream-service-time': '100', 'www-authenticate': 'Bearer realm="Service", error="insufficient_scope", error_description="The request requires higher privileges than provided by the access token."', 'box-request-id': '07cba17694f7ea32f0c2cd42790bce39e', 'strict-transport-security': 'max-age=31536000', 'Via': '1.1 google', 'Alt-Svc': 'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"'}
URL: https://api.box.com/2.0/folders/0/items
Method: GET
Context Info: None
It says the request requires higher privileges. What might I be doing wrong? I've been stuck on this particular problem for a little over a week now, so any help is highly appreciated.
Can you test to see if the user is in fact being impersonated?
Something like this:
from boxsdk import JWTAuth, Client

def main():
    """main function"""
    auth = JWTAuth.from_settings_file('./.jwt.config.json')
    auth.authenticate_instance()
    client = Client(auth)

    me = client.user().get()
    print(f"Service account user: {me.id}:{me.name}")

    user_id_to_impersonate = '18622116055'
    folder_of_user_to_impersonate = '0'

    user_to_impersonate = client.user(user_id=user_id_to_impersonate).get()
    # the .get() is just to be able to print the impersonated user
    print(f"User to impersonate: {user_to_impersonate.id}:{user_to_impersonate.name}")

    user_client = client.as_user(user_to_impersonate)

    items = user_client.folder(folder_id=folder_of_user_to_impersonate).get_items()
    print(f"Items in folder:{items}")

    # we need a loop to actually get the items' info
    for item in items:
        print(f"Item: {item.type}\t{item.id}\t{item.name}")

if __name__ == '__main__':
    main()
Check out my output:
Service account user: 20344589936:UI-Elements-Sample
User to impersonate: 18622116055:Rui Barbosa
Items in folder:<boxsdk.pagination.limit_offset_based_object_collection.LimitOffsetBasedObjectCollection object at 0x105fffe20>
Item: folder 172759373899 Barduino User Folder
Item: folder 172599089223 Bookings
Item: folder 162833533610 Box Reports
Item: folder 163422716106 Box UI Elements Demo
I want to add a network IP range with a prefix in the web UI and process it using
ipaddress.IPv4Network(subnet).hosts()
and then create and save the whole range of IPs into the database.
I have tried different methods but am still not able to meet this requirement.
Could someone help with this?
Below is the code I wrote.
def Indexping(request):
    form = IpModelForm
    Ipform = {'form': form}
    if request.method == 'POST':
        subnet = IpModelForm(request.POST)
        if subnet.is_valid:
            data = list(ipaddress.IPv4Network(subnet).hosts())
            for f in data:
                # f = [x for x in subnet]
                f.save()
I'm getting the error below:
AddressValueError at /cbv/ind/
Only one '/' permitted in
Request Method: POST
Request URL: http://127.0.0.1:8000/cbv/ind/
Django Version: 4.0.2
Exception Type: AddressValueError
Exception Value:
Only one '/' permitted in
Exception Location: D:\Program Files\Python\Python39\lib\ipaddress.py, line 162, in _split_optional_netmask
Python Executable: E:\Django_Projects\Portal-env\Scripts\python.exe
Python Version: 3.9.10
Python Path:
['E:\Django_Projects\Portal-env\portal',
'D:\Program Files\Python\Python39\python39.zip',
'D:\Program Files\Python\Python39\DLLs',
'D:\Program Files\Python\Python39\lib',
'D:\Program Files\Python\Python39',
'E:\Django_Projects\Portal-env',
'E:\Django_Projects\Portal-env\lib\site-packages']
Server time: Sat, 19 Feb 2022 09:35:57 +0000
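For what it's worth, IPv4Network needs the CIDR string itself (e.g. form.cleaned_data['subnet']), not the form object: converting a form to a string produces HTML markup containing many '/' characters, which would explain the "Only one '/' permitted" error (note also that is_valid must be called as a method, is_valid()). A minimal sketch of the expansion step alone; expand_subnet is a hypothetical helper name and the field name 'subnet' is an assumption:

```python
import ipaddress

def expand_subnet(subnet_str):
    """Expand a CIDR string such as '192.168.1.0/30' into its host addresses."""
    network = ipaddress.IPv4Network(subnet_str, strict=False)
    return [str(host) for host in network.hosts()]

# In the view, the string would come from the validated form, e.g.:
#   subnet_str = subnet.cleaned_data['subnet']   # assumed field name
print(expand_subnet('192.168.1.0/30'))  # ['192.168.1.1', '192.168.1.2']
```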
I have a GitLab CI YAML file with 2 jobs. My .gitlab-ci.yml file is:
variables:
  MSBUILD_PATH: 'C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe'
  SOLUTION_PATH: 'Source/NewProject.sln'

stages:
  - build
  - trigger_IT_service

build_job:
  stage: build
  script:
    - '& "$env:MSBUILD_PATH" "$env:SOLUTION_PATH" /nologo /t:Rebuild /p:Configuration=Debug'

trigger_IT_service_job:
  stage: trigger_IT_service
  script:
    - 'curl http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer'
And this is my trigger_IT_service job report:
Running on DIGITALIZATION...
00:00
Fetching changes with git depth set to 50...
00:05
Reinitialized existing Git repository in D:/GitLab-Runner/builds/c11pExsu/0/personalname/newproject/.git/
Checking out 24be087a as master...
Removing Output/
git-lfs/2.5.2 (GitHub; windows amd64; go 1.10.3; git 8e3c5c93)
Skipping Git submodules setup
$ curl http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer
00:02
StatusCode : 200
StatusDescription : 200
Content : {"status":200,"message":"SAP transfer started. Please
check in db","errorCode":0,"timestamp":"2020-03-25T13:53:05
.722+0300","responseObject":null}
RawContent : HTTP/1.1 200 200
Keep-Alive: timeout=10
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json;charset=UTF-8
Date: Wed, 25 Mar 2020 10:53:05 GMT
Server: Apache
I have to check the "Content" part of this report in the GitLab CI YAML.
If "message" is "SAP transfer started. Please check in db", the pipeline should pass; otherwise it must fail.
So my actual question is:
how do I parse the HTTP JSON response and pass or fail the job based on it?
Thank you for all your help.
The best way would be to install some tool to parse JSON and use it; there are different examples here.
Given the JSON example from the comment:
{
"status": 200,
"message": "SAP transfer started. Please check in db",
"errorCode": 0,
"timestamp": "2020-03-25T17:06:43.430+0300",
"responseObject": null
}
If you can install python3 on your runner, you could achieve it all with a script:
import requests  # note: this might require an additional install with pip install requests

message = requests.get('http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer').json()['message']
if message != 'SAP transfer started. Please check in db':
    print('Invalid message: ' + message)
    exit(1)
else:
    print('Message ok')
So the trigger_IT_service stage in your yaml would be:
trigger_IT_service_job:
  stage: trigger_IT_service
  script: >
    python -c "import requests; message = requests.get('http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer').json()['message']; (print('Invalid message: ' + message), exit(1)) if message != 'SAP transfer started. Please check in db' else (print('Message ok'), exit(0))"
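If the one-liner gets hard to read, the same check can live in a small script committed to the repo (check_transfer.py is a hypothetical file name); it only assumes the response body shape shown in the job log above:

```python
import json

EXPECTED_MESSAGE = 'SAP transfer started. Please check in db'

def message_ok(raw_json):
    """Return True when the response body reports a successful transfer start."""
    payload = json.loads(raw_json)
    return payload.get('message') == EXPECTED_MESSAGE

# Example with the response body shown in the job log above:
sample = '{"status":200,"message":"SAP transfer started. Please check in db","errorCode":0,"responseObject":null}'
print(message_ok(sample))  # True
```

A job script could then feed the captured curl output into such a check and exit non-zero when it returns False, which fails the job.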
I'm running into trouble with an Apache Beam pipeline on Google Cloud Dataflow.
The pipeline is simple: reading JSON from GCS, extracting text from some nested fields, and writing back to GCS.
It works fine when testing with a smaller subset of input files, but when I run it on the full data set, I get the following error (after running fine through around 260M items).
Somehow the "worker eventually lost contact with the service".
(8662a188e74dae87): Workflow failed. Causes: (95e9c3f710c71bc2): S04:ReadFromTextWithFilename/Read+FlatMap(extract_text_from_raw)+RemoveLineBreaks+FormatText+WriteText/Write/WriteImpl/WriteBundles/Do+WriteText/Write/WriteImpl/Pair+WriteText/Write/WriteImpl/WindowInto(WindowIntoFn)+WriteText/Write/WriteImpl/GroupByKey/Reify+WriteText/Write/WriteImpl/GroupByKey/Write failed., (da6389e4b594e34b): A work item was attempted 4 times without success. Each time the worker eventually lost contact with the service. The work item was attempted on:
extract-tags-150110997000-07261602-0a01-harness-jzcn,
extract-tags-150110997000-07261602-0a01-harness-828c,
extract-tags-150110997000-07261602-0a01-harness-3w45,
extract-tags-150110997000-07261602-0a01-harness-zn6v
The stack trace shows a "Failed to update work status" / "Progress reporting thread got error" error:
Exception in worker loop:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 776, in run
    deferred_exception_details=deferred_exception_details)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 629, in do_work
    exception_details=exception_details)
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/utils/retry.py", line 168, in wrapper
    return fun(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 490, in report_completion_status
    exception_details=exception_details)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 298, in report_status
    work_executor=self._work_executor)
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/workerapiclient.py", line 333, in report_status
    self._client.projects_locations_jobs_workItems.ReportStatus(request))
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/dataflow/internal/clients/dataflow/dataflow_v1b3_client.py", line 467, in ReportStatus
    config, request, global_params=global_params)
  File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 723, in _RunMethod
    return self.ProcessHttpResponse(method_config, http_response, request)
  File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 729, in ProcessHttpResponse
    self.__ProcessHttpResponse(method_config, http_response, request))
  File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 600, in __ProcessHttpResponse
    http_response.request_url, method_config, request)
HttpError: HttpError accessing <https://dataflow.googleapis.com/v1b3/projects/qollaboration-live/locations/us-central1/jobs/2017-07-26_16_02_36-1885237888618334364/workItems:reportStatus?alt=json>: response: <{'status': '400', 'content-length': '360', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Wed, 26 Jul 2017 23:54:12 GMT', 'x-frame-options': 'SAMEORIGIN', 'content-type': 'application/json; charset=UTF-8'}>, content <{ "error": { "code": 400, "message": "(7f8a0ec09d20c3a3): Failed to publish the result of the work update. Causes: (7f8a0ec09d20cd48): Failed to update work status. Causes: (afa1cd74b2e65619): Failed to update work status., (afa1cd74b2e65caa): Work \"6306998912537661254\" not leased (or the lease was lost).", "status": "INVALID_ARGUMENT" } } >
And Finally:
HttpError: HttpError accessing <https://dataflow.googleapis.com/v1b3/projects/[projectid-redacted]/locations/us-central1/jobs/2017-07-26_18_28_43-10867107563808864085/workItems:reportStatus?alt=json>: response: <{'status': '400', 'content-length': '358', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Thu, 27 Jul 2017 02:00:10 GMT', 'x-frame-options': 'SAMEORIGIN', 'content-type': 'application/json; charset=UTF-8'}>, content <{ "error": { "code": 400, "message": "(5845363977e915c1): Failed to publish the result of the work update. Causes: (5845363977e913a8): Failed to update work status. Causes: (44379dfdb8c2b47): Failed to update work status., (44379dfdb8c2e88): Work \"9100669328839864782\" not leased (or the lease was lost).", "status": "INVALID_ARGUMENT" } } >
at __ProcessHttpResponse (/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py:600)
at ProcessHttpResponse (/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py:729)
at _RunMethod (/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py:723)
at ReportStatus (/usr/local/lib/python2.7/dist-packages/apache_beam/runners/dataflow/internal/clients/dataflow/dataflow_v1b3_client.py:467)
at report_status (/usr/local/lib/python2.7/dist-packages/dataflow_worker/workerapiclient.py:333)
at report_status (/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py:298)
at report_completion_status (/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py:490)
at wrapper (/usr/local/lib/python2.7/dist-packages/apache_beam/utils/retry.py:168)
at do_work (/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py:629)
at run (/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py:776)
This looks like an error in the Dataflow internals to me. Can anyone confirm? Are there any workarounds?
The HttpError typically appears after the workflow has failed and is part of the failure/teardown process.
It looks like there were other errors reported in your pipeline, such as the following. Note that if the same elements fail 4 times, the job will be marked as failing.
Try looking at the Stack Traces section in the UI to identify the other errors and their stack traces. Since this only occurs on the larger dataset, consider the possibility of there being malformed elements that only exist in the larger dataset.
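If malformed elements do turn out to be the cause, one common pattern is to make the extraction step defensive, so bad records are logged and skipped instead of repeatedly crashing a bundle. A sketch reusing the step name from the pipeline graph above (the field layout inside the record is a hypothetical placeholder, not your actual schema):

```python
import json
import logging

def extract_text_from_raw(line):
    """Defensively extract text from a raw JSON line; yield nothing for bad records."""
    try:
        record = json.loads(line)
        yield record['nested']['text']  # hypothetical field layout
    except (ValueError, KeyError, TypeError) as exc:
        logging.warning('Skipping malformed element: %s', exc)

# Used with beam.FlatMap(extract_text_from_raw), bad lines simply drop out:
print(list(extract_text_from_raw('{"nested": {"text": "hello"}}')))  # ['hello']
print(list(extract_text_from_raw('not valid json')))  # []
```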
I'm trying to use mailgun in my Django project hosted on Pythonanywhere.
In my WSGI file, I have:
os.environ['DJANGO_MAILGUN_SERVER_NAME'] = 'https://api.mailgun.net/v3/sandboxnumbersomething.mailgun.org/messages'
os.environ['DJANGO_MAILGUN_API_KEY'] ='mykey'
and my settings are:
# EMAIL
# ------------------------------------------------------------------------------
DEFAULT_FROM_EMAIL = env('DJANGO_DEFAULT_FROM_EMAIL',
                         default='Apitrak <noreply@apitrak.com>')
EMAIL_BACKEND = 'django_mailgun.MailgunBackend'
MAILGUN_ACCESS_KEY = env('DJANGO_MAILGUN_API_KEY')
MAILGUN_SERVER_NAME = env('DJANGO_MAILGUN_SERVER_NAME')
When my app fires an email (for example at signup), I get a 404 error:
MailgunAPIError at /accounts/email/
<Response [404]>
Request Method: POST
Request URL: https://vincentle.pythonanywhere.com/accounts/email/
Django Version: 1.8.6
Exception Type: MailgunAPIError
Exception Value:
<Response [404]>
Exception Location: /home/vincentle/.virtualenvs/apitrak/lib/python3.4/site-packages/django_mailgun.py in _send, line 154
Python Executable: /usr/local/bin/uwsgi
Python Version: 3.4.0
Python Path:
['/var/www',
'.',
'',
'/var/www',
'/home/vincentle/.virtualenvs/apitrak/lib/python3.4',
'/home/vincentle/.virtualenvs/apitrak/lib/python3.4/plat-x86_64-linux-gnu',
'/home/vincentle/.virtualenvs/apitrak/lib/python3.4/lib-dynload',
'/usr/lib/python3.4',
'/usr/lib/python3.4/plat-x86_64-linux-gnu',
'/home/vincentle/.virtualenvs/apitrak/lib/python3.4/site-packages',
'/home/vincentle/apitrak']
Server time: Tue, 17 Nov 2015 16:02:28 +0100
I've tried a curl in the virtualenv of my web app:
curl -s --user 'api:key-NUMBERS' https://api.mailgun.net/v3/NUMBERS.mailgun.org/messages -F from='Excited User <excited@samples.mailgun.org>' -F to='vincent@vincentle.fr' -F subject='Hello' -F text='Testing some Mailgun awesomeness!'
And this works OK.
The setting DJANGO_MAILGUN_SERVER_NAME should be a domain name, not a URL.
Try the following:
os.environ['DJANGO_MAILGUN_SERVER_NAME'] = '<sandboxnumbersomething>.mailgun.org'
From the readme:
Replace SERVER-NAME with the last part of your "API Base URL" (e.g. https://api.mailgun.net/v3/<your_server_name>), also found in your Mailgun account details.
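A sketch of why the full URL produces a 404: the backend appears to build the endpoint from the server name itself (the template below is an assumption based on the readme excerpt above, not django_mailgun's actual source), so passing a full URL doubles the scheme and path:

```python
# Assumed URL-building pattern, based on the "API Base URL" note above.
API_TEMPLATE = 'https://api.mailgun.net/v3/%s/messages'

good = API_TEMPLATE % 'sandboxnumbersomething.mailgun.org'
bad = API_TEMPLATE % 'https://api.mailgun.net/v3/sandboxnumbersomething.mailgun.org/messages'

print(good)  # https://api.mailgun.net/v3/sandboxnumbersomething.mailgun.org/messages
print(bad)   # the scheme and path appear twice, so Mailgun answers 404
```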