AWS Lambda gets the content of a private wiki in Azure

I'm trying to access a private wiki in Azure (I'm an admin in the project), but from a Lambda in AWS.
import requests
import base64

def lambda_handler(event, context):
    # Replace with your own Azure DevOps organization name
    organization_name = "name of the organization"
    # Replace with the project name that contains the wiki
    project_name = "project name"
    # Replace with the name of the wiki
    wiki_name = "wiki name"
    # Replace with the version of the wiki to retrieve
    wiki_version = "1234"
    # Replace with your own personal access token (PAT)
    personal_access_token = "my pat"

    # Build the URL to retrieve the wiki content
    url = f"https://dev.azure.com/{organization_name}/{project_name}/_apis/wiki/wikis/{wiki_name}/{wiki_version}/content?api-version=7.0"

    # Set the authorization header with the base64-encoded PAT
    # (Azure DevOps expects "user:PAT"; the username may be empty, hence the leading colon)
    auth_header = f"Basic {base64.b64encode(f':{personal_access_token}'.encode('utf-8')).decode('utf-8')}"
    headers = {
        "Authorization": auth_header,
        "Accept": "application/json"
    }

    # Make the request to retrieve the wiki content
    response = requests.get(url, headers=headers)

    # Return the response if successful, otherwise return an error message
    if response.status_code == 200:
        return response.json()
    else:
        return {"statusCode": response.status_code, "error": response.text}
My Lambda has a security group allowing all inbound and outbound traffic, and it is in a public subnet.
The Lambda also has a very permissive role.
But I always get a
"[ERROR] ConnectionError: HTTPSConnectionPool(host='dev.azure.com', port=443): Max retries exceeded with url:xxxxx (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f405ba34790>: Failed to establish a new connection: [Errno 110] Connection timed out'))"
Not sure if the issue is in the Lambda or in Azure.
I already tried changing the permissions and changing the role of the AWS Lambda.
I also got another PAT with all permissions allowed.

Related

Enable Google Cloud Resource Manager API via the Python client API

If the API is not enabled, the error usually looks like:
HttpError: <HttpError 403 when requesting https://cloudresourcemanager.googleapis.com/v1/projects?alt=json returned "Cloud Resource Manager API has not been used in project 684373208471 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/overview?project=xxxxxxxxxx then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.". Details: "[{'#type': 'type.googleapis.com/google.rpc.Help', 'links': [{'description': 'Google developers console API activation', 'url': 'https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/overview?project=xxxxxxxxx'}]}, {'#type': 'type.googleapis.com/google.rpc.ErrorInfo', 'reason': 'SERVICE_DISABLED', 'domain': 'googleapis.com', 'metadata': {'consumer': 'projects/xxxxxxxxxxx', 'service': 'cloudresourcemanager.googleapis.com'}}]">
Does Google provide an API for enabling other APIs?
Google Cloud provides an API to enable APIs: the Service Usage API.
Service Usage REST API
Service Usage Client Libraries
Example code that I wrote to display a list of enabled APIs in Python:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()

project = 'projects/myproject'

service = discovery.build('serviceusage', 'v1', credentials=credentials)

request = service.services().list(parent=project)

try:
    response = request.execute()
except Exception as e:
    print(e)
    exit(1)

# FIX - This code does not process the nextPageToken (see the pagination sketch below)
# next = response.get('nextPageToken')

services = response.get('services')

for item in services:
    name = item['config']['name']
    state = item['state']
    print("%-50s %s" % (name, state))
The output of this code looks similar to this (truncated for this answer):
abusiveexperiencereport.googleapis.com DISABLED
acceleratedmobilepageurl.googleapis.com DISABLED
accessapproval.googleapis.com DISABLED
accesscontextmanager.googleapis.com DISABLED
actions.googleapis.com DISABLED
adexchangebuyer-json.googleapis.com DISABLED
adexchangebuyer.googleapis.com DISABLED
adexchangeseller.googleapis.com DISABLED
adexperiencereport.googleapis.com DISABLED
admin.googleapis.com ENABLED
adsense.googleapis.com DISABLED
API Documentation for my example code
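The example above deliberately skips nextPageToken handling (see the FIX comment). A minimal sketch of the missing pagination loop, reusing the same service and project objects from the example (list_next() is the standard pagination helper in google-api-python-client):

request = service.services().list(parent=project)
while request is not None:
    response = request.execute()
    for item in response.get('services', []):
        print("%-50s %s" % (item['config']['name'], item['state']))
    # list_next() builds the next-page request, or returns None on the last page
    request = service.services().list_next(request, response)

And since the question asks about enabling an API, a minimal sketch of the enable call with the same client (the project and service names are placeholders; services().enable() returns a long-running operation):

# Enable the Cloud Resource Manager API in the given project
request = service.services().enable(
    name='projects/myproject/services/cloudresourcemanager.googleapis.com',
    body={})
operation = request.execute()
print(operation)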

How to set credentials to use Gmail API from GCE VM Command Line?

I'm trying to enable my Linux VM on GCE to access my Gmail account in order to send emails.
I've found this article, https://developers.google.com/gmail/api/quickstart/python, which reads some Gmail account information (it's useful since I just want to test my connection).
from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/gmail.readonly']

def main():
    """Shows basic usage of the Gmail API.
    Lists the user's Gmail labels.
    """
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)

    service = build('gmail', 'v1', credentials=creds)

    # Call the Gmail API
    results = service.users().labels().list(userId='me').execute()
    labels = results.get('labels', [])

    if not labels:
        print('No labels found.')
    else:
        print('Labels:')
        for label in labels:
            print(label['name'])

if __name__ == '__main__':
    main()
However, I'm not sure which credentials I need to set. When I set and use:
- Service Account: I receive the following message: ValueError: Client secrets must be for a web or installed app.
- OAuth client ID for Web Application type: the code runs well, but I receive the following message when trying to first authorize the application's access:
Error 400: redirect_uri_mismatch
The redirect URI in the request, http://localhost:60443/, does not match the ones authorized for the OAuth client. To update the authorized redirect URIs, visit: https://console.developers.google.com/apis/credentials/oauthclient/${your_client_id}?project=${your_project_number}
- OAuth client ID for Desktop type: the code runs well, but I receive the following message when trying to first authorize the application's access:
localhost connection has been refused
Does anyone know what the correct setup is and whether I'm missing anything?
Thanks
Edit (Nov 17th):
After adding the Gmail scope to my VM's scopes, I ran the Python script and got the following error:
Traceback (most recent call last):
  File "quickstart2.py", line 29, in <module>
    main()
  File "quickstart2.py", line 18, in main
    results = service.users().labels().list(userId="107628233971924038182").execute()
  File "/home/lucasnunesfe9/.local/lib/python3.7/site-packages/googleapiclient/_helpers.py", line 134, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/home/lucasnunesfe9/.local/lib/python3.7/site-packages/googleapiclient/http.py", line 915, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://gmail.googleapis.com/gmail/v1/users/107628233971924038182/labels?alt=json returned "Precondition check failed.">
I checked the error HTTP link and it shows:
{
  "error": {
    "code": 401,
    "message": "Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.",
    "errors": [
      {
        "message": "Login Required.",
        "domain": "global",
        "reason": "required",
        "location": "Authorization",
        "locationType": "header"
      }
    ],
    "status": "UNAUTHENTICATED"
  }
}
Is any manual procedure needed in a "first authorization step"?
PS: note that I have already enabled "G Suite Domain-wide Delegation" for my Service Account. This action generated an OAuth 2.0 Client ID, which is what I am using in the Python script (the userId variable).
Personally, I never understood this example. I think it's too old (it's even Python 2.6 compatible!!).
Anyway, you can forget the first part and get credentials like this:
from googleapiclient.discovery import build
import google.auth

SCOPES = ['https://www.googleapis.com/auth/gmail.readonly']

def main():
    creds, project_id = google.auth.default(scopes=SCOPES)

    service = build('gmail', 'v1', credentials=creds)

    # Call the Gmail API
    results = service.users().labels().list(userId='me').execute()
    labels = results.get('labels', [])

    if not labels:
        print('No labels found.')
    else:
        print('Labels:')
        for label in labels:
            print(label['name'])

if __name__ == '__main__':
    main()
However, because you will use a service account to access your Gmail account, you have to change the userId to the ID that you want, and you have to grant the service account access to the Gmail API from the G Suite admin console.
To achieve this, you need to update the scopes of your VM's service account. Stop the VM, run this (long) command, and start your VM again. The command:
- Takes the current scopes of your VM (and cleans/formats them)
- Adds the gmail scope
- Sets the scopes on the VM (it's a BETA command)
gcloud beta compute instances set-scopes <YOUR_VM_NAME> --zone <YOUR_VM_ZONE> \
--scopes=https://www.googleapis.com/auth/gmail.readonly,\
$(gcloud compute instances describe <YOUR_VM_NAME> --zone <YOUR_VM_ZONE> \
--format json | jq -c ".serviceAccounts[0].scopes" | sed -E "s/\[(.*)\]/\1/g" | sed -E "s/\"//g")
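To verify from inside the VM that the new scope is active, one option (a sketch, not part of the original answer) is to query the GCE metadata server, which lists the scopes granted to the VM's service account:

import requests

# The metadata server is only reachable from inside the VM.
scopes = requests.get(
    'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes',
    headers={'Metadata-Flavor': 'Google'}).text
print(scopes)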

How can I get access to a GCP Cloud Function from Python code using a service account?

I've deployed a simple GCP Cloud Function which returns "Hello World!".
I need this function to be behind authorization, so I unchecked the "Allow unauthenticated invocations" checkbox; only authenticated invocations can call this code.
I also created a Service Account and gave it the following roles:
- Cloud Functions Invoker
- Cloud Functions Service Agent
My code:
from google.oauth2 import service_account
from google.auth.transport.urllib3 import AuthorizedHttp

if __name__ == '__main__':
    credentials = service_account.Credentials.from_service_account_file(
        'service-account.json',
        scopes=['https://www.googleapis.com/auth/cloud-platform'],
        subject='service-acc@<project_id>.iam.gserviceaccount.com')

    authed_session = AuthorizedHttp(credentials)

    response = authed_session.urlopen('POST', 'https://us-central1-<project_id>.cloudfunctions.net/main')
    print(response.data)
and I got this response:
b'\n<html><head>\n<meta http-equiv="content-type" content="text/html;charset=utf-8">\n<title>401 Unauthorized</title>\n</head>\n<body text=#000000 bgcolor=#ffffff>\n<h1>Error: Unauthorized</h1>\n<h2>Your client does not have permission to the requested URL <code>/main</code>.</h2>\n<h2></h2>\n</body></html>\n'
How to become authorized?
Your example code is generating an access token. Below is a real example that generates an identity token and uses that token to call a Cloud Functions endpoint. The service account used for authorization needs the Cloud Functions Invoker role on the function.
import json
import base64
import requests

import google.auth.transport.requests
from google.oauth2.service_account import IDTokenCredentials

# The service account JSON key file to use to create the Identity Token
sa_filename = 'service-account.json'

# Endpoint to call
endpoint = 'https://us-east1-replace_with_project_id.cloudfunctions.net/main'

# The audience that this ID token is intended for (example Google Cloud Functions service URL)
aud = 'https://us-east1-replace_with_project_id.cloudfunctions.net/main'

def invoke_endpoint(url, id_token):
    headers = {'Authorization': 'Bearer ' + id_token}

    r = requests.get(url, headers=headers)

    if r.status_code != 200:
        print('Calling endpoint failed')
        print('HTTP Status Code:', r.status_code)
        print(r.content)
        return None

    return r.content.decode('utf-8')

if __name__ == '__main__':
    credentials = IDTokenCredentials.from_service_account_file(
        sa_filename,
        target_audience=aud)

    request = google.auth.transport.requests.Request()
    credentials.refresh(request)

    # This is debug code to show how to decode Identity Token
    # print('Decoded Identity Token:')
    # print_jwt(credentials.token.encode())

    response = invoke_endpoint(endpoint, credentials.token)

    if response is not None:
        print(response)
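The commented-out print_jwt helper is not included in the answer; a minimal sketch of such a decoder (an assumption, not the original author's code) that uses the json and base64 imports from the example:

def print_jwt(token_bytes):
    # A JWT is three base64url segments: header.payload.signature.
    # Decode and pretty-print the header and payload; the signature is binary.
    for segment in token_bytes.split(b'.')[:2]:
        segment += b'=' * (-len(segment) % 4)  # restore stripped base64 padding
        print(json.dumps(json.loads(base64.urlsafe_b64decode(segment)), indent=2))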
You need to get an identity token to be able to (HTTP) trigger your cloud function, AND your service account needs to have at least the Cloud Functions Invoker role.
(To keep it as safe as possible, create a service account that can only invoke cloud functions, nothing else, to limit the damage if your service account key file falls into the wrong hands.)
Then you can run the following:
import os

import google.oauth2.id_token
import google.auth.transport.requests
import requests

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'service_account_key_file.json'

google_auth_request = google.auth.transport.requests.Request()

cloud_functions_url = 'https://europe-west1-your_project_id.cloudfunctions.net/your_function_name'

# fetch_id_token reads the key file pointed to by GOOGLE_APPLICATION_CREDENTIALS;
# the audience must be the URL of the function being called.
id_token = google.oauth2.id_token.fetch_id_token(
    google_auth_request, cloud_functions_url)

headers = {'Authorization': f'Bearer {id_token}'}

response = requests.get(cloud_functions_url, headers=headers)
print(response.content)
This question and its answers also helped me:
How can I retrieve an id_token to access a Google Cloud Function?
If you need an access token via Python (instead of an identity token), check here:
How to get a GCP Bearer token programmatically with python
The error is because the calls are not authenticated.
Since the calls are now coming from your localhost, this should work for you:
curl https://REGION-PROJECT_ID.cloudfunctions.net/FUNCTION_NAME \
-H "Authorization: bearer $(gcloud auth print-identity-token)"
This will make the call from your localhost with the account you have active in gcloud.
The account making the call must have the Cloud Functions Invoker role.
In the future, just make sure that the service calling this function has the role mentioned.

boto3 generate_presigned_url with SSE encryption

I am looking for examples of generating a presigned URL using boto3 with SSE encryption.
Here is my code so far:
s3_client = boto3.client('s3',
                         region_name='ap-south-1',
                         endpoint_url='http://s3.ap-south-1.amazonaws.com',
                         config=boto3.session.Config(signature_version='s3v4'),
                         )
try:
    response = s3_client.generate_presigned_url('put_object',
                                                Params={'Bucket': bucket_name,
                                                        'Key': object_name},
                                                ExpiresIn=expiration)
except ClientError as e:
    logging.error("In client error exception code")
    logging.error(e)
    return None
I am struggling to find the right parameters to use SSE encryption.
I am able to use a PUT call to upload a file. I would also like to know which headers to use from the client side to adhere to SSE encryption.
import boto3

access_key = "..."
secret_key = "..."
bucket = "..."
filename = "..."

s3 = boto3.client('s3',
                  aws_access_key_id=access_key,
                  aws_secret_access_key=secret_key)

url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': bucket,
        'Key': filename,
        'SSECustomerAlgorithm': 'AES256',
    }
)
print(url)
Also add the header 'x-amz-server-side-encryption': 'AES256' in the front-end code when calling the presigned URL.
You can add Conditions to the pre-signed URL that must be met for the upload to be valid. This could probably include x-amz-server-side-encryption.
See: Creating a POST Policy - Amazon S3
Alternatively, you could add a bucket policy that denies any request that is not encrypted.
See: How to Prevent Uploads of Unencrypted Objects to Amazon S3 | AWS Security Blog
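For the put_object case from the question, a minimal sketch combining both answers (this assumes SSE-S3, i.e. the x-amz-server-side-encryption: AES256 header; the bucket and key names are placeholders):

import boto3
import requests

s3_client = boto3.client('s3',
                         region_name='ap-south-1',
                         config=boto3.session.Config(signature_version='s3v4'))

bucket_name = 'my-bucket'  # placeholder
object_name = 'my-key'     # placeholder

# Signing ServerSideEncryption into the URL means S3 enforces the header.
url = s3_client.generate_presigned_url('put_object',
                                       Params={'Bucket': bucket_name,
                                               'Key': object_name,
                                               'ServerSideEncryption': 'AES256'},
                                       ExpiresIn=3600)

# The uploader must send the matching header, or the signature check fails.
response = requests.put(url, data=b'file contents',
                        headers={'x-amz-server-side-encryption': 'AES256'})
print(response.status_code)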

Swisscom AppCloud S3: Connection reset by peer

We have a Django web service that uses Swisscom AppCloud's S3 solution. So far we had no problems, but without changing anything in the application we are now getting ConnectionError: ('Connection aborted.', error(104, 'Connection reset by peer')) errors when we try to upload files. We are using boto3 1.4.4.
Edit:
The error occurs somewhere between 10 and 30 seconds after the request. When I try from my local development machine, it works.
from django.conf import settings
from boto3 import session
from botocore.exceptions import ClientError

class S3Client(object):

    def __init__(self):
        s3_session = session.Session()
        self.s3_client = s3_session.client(
            service_name='s3',
            aws_access_key_id=settings.AWS_ACCESS_KEY,
            aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
            endpoint_url=settings.S3_ENDPOINT,
        )

    .
    .
    .

    def add_file(self, bucket, fileobj, file_name):
        self.s3_client.upload_fileobj(fileobj, bucket, file_name)
        url = self.s3_client.generate_presigned_url(
            ClientMethod='get_object',
            Params={
                'Bucket': bucket,
                'Key': file_name
            },
            ExpiresIn=60*24*356*10  # signed for 10 years. Should be enough..
        )
        url, signature = self._split_signed_url(url)
        return url, signature, file_name
Could this be a version problem or anything else on our side?
Edit:
I ran some tests with s3cmd: I can list the buckets I have access to, but for all other commands, like listing all objects or just listing the objects in a bucket, I get Retrying failed request: / ([Errno 54] Connection reset by peer).
After some investigation I found the cause:
Swisscom's implementation of S3 is somehow not up to date with Amazon's. To solve the problem, I had to downgrade botocore from 1.5.78 to 1.5.62.
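For reference, one way to pin the working versions (assuming pip; boto3 1.4.4 accepts botocore in the 1.5.x range):

pip install boto3==1.4.4 botocore==1.5.62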