I'm not getting the expected response from client.describe_image_scan_findings() using Boto3

I'm trying to use Boto3 to get the number of vulnerabilities from the images in my repositories. I have a list of repository names and image IDs that get passed into this function. Based on the documentation,
I'm expecting a response like this when I filter for ['imageScanFindings']:
'imageScanFindings': {
    'imageScanCompletedAt': datetime(2015, 1, 1),
    'vulnerabilitySourceUpdatedAt': datetime(2015, 1, 1),
    'findingSeverityCounts': {
        'string': 123
    },
    'findings': [
        {
            'name': 'string',
            'description': 'string',
            'uri': 'string',
            'severity': 'INFORMATIONAL'|'LOW'|'MEDIUM'|'HIGH'|'CRITICAL'|'UNDEFINED',
            'attributes': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ]
        },
    ],
    ...
What I really need is the 'findingSeverityCounts' number; however, it's not showing up in my response. Here's my code and the response I get:
main.py
import boto3

repo_names = ['cftest/repo1', 'your-repo-name', 'cftest/repo2']
image_ids = ['1.1.1', 'latest', '2.2.2']

def get_vuln_count(repo_names, image_ids):
    container_inventory = []
    client = boto3.client('ecr')
    for n, i in zip(repo_names, image_ids):
        response = client.describe_image_scan_findings(
            repositoryName=n,
            imageId={'imageTag': i}
        )
        findings = response['imageScanFindings']
        print(findings)
Output
{'findings': []}
The only key that shows up is 'findings'. I was expecting 'findingSeverityCounts' along with the other keys, but nothing else appears in the response.
THEORY
I have 3 repositories, each with an image that I uploaded. One of my theories is that I'm not getting the other fields, such as 'findingSeverityCounts', because my images don't have vulnerabilities. I have Amazon Inspector set up to scan on push, but the images don't have vulnerabilities, so nothing shows up in the Inspector dashboard. Could that be causing the issue? If so, how would I be able to generate a vulnerability in one of my images to test this out?

My theory was correct: when there are no vulnerabilities, the response completely omits certain values, including the 'findingSeverityCounts' value that I needed.
I created a Docker image based on Python 2.7 to generate vulnerabilities so I could test my script properly. My workaround was to implement the if statement below: if there are vulnerabilities, it returns them; if there aren't any, 'findingSeverityCounts' is omitted from the response, so it returns 0 instead of raising a KeyError.
Example Solution:
response = client.describe_image_scan_findings(
    repositoryName=n,
    imageId={'imageTag': i}
)
if 'findingSeverityCounts' in response['imageScanFindings']:
    print(response['imageScanFindings']['findingSeverityCounts'])
else:
    print(0)
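As a variant of the same guard, dict.get() with a default keeps the lookup to a single expression and also makes it easy to total the counts. This is just a sketch of the same idea, not part of the original answer:

severity_counts = response['imageScanFindings'].get('findingSeverityCounts', {})  # {} when there are no findings
print(sum(severity_counts.values()))  # 0 for a clean image

To reproduce findings for testing, as asked above, one approach (a hedged sketch, building on the Python 2.7 image mentioned in this answer) is to push an image built from a long end-of-life base, which should surface known CVEs in a scan-on-push repository:

# Dockerfile: python:2.7 is end-of-life and carries many published CVEs,
# so an ECR scan of this image should report non-empty findings
FROM python:2.7
CMD ["python", "--version"]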

Related

How to get the Linked_Account_Name while calling the Cost Explorer API

I have the below code to get Cost Explorer details using Boto3, which gives the data on the basis of account_id. I need the details on the basis of Linked_account_Name. Can someone guide me on how to proceed?
response = ce.get_cost_and_usage(
    TimePeriod={
        'Start': '2020-01-01',
        'End': '2020-01-03'
    },
    Granularity='MONTHLY',
    Metrics=[
        'UnblendedCost',
    ],
    GroupBy=[
        {
            'Type': 'DIMENSION',
            'Key': 'LINKED_ACCOUNT'
        },
    ]
)
LINKED_ACCOUNT_NAME is not a valid dimension in any of the three contexts (COST_AND_USAGE, RESERVATIONS, SAVINGS_PLANS).
Dimensions are also limited to LINKED_ACCOUNT, REGION, or RIGHTSIZING_TYPE in get_cost_and_usage().
So you won't be able to use it.
You can instead use the get_dimension_values() function to get the linked account name (see the Boto3 documentation for more info).
client = session.client('ce')
response = client.get_dimension_values(
    SearchString='123456789098',
    TimePeriod={
        'Start': '2020-01-01',
        'End': '2020-03-01'
    },
    Dimension='LINKED_ACCOUNT',
    Context='COST_AND_USAGE'
)
for each in response['DimensionValues']:
    print('Account Name is ->', each['Attributes']['description'])
The output will look like this:
Account Name is -> Test 0100
It's not a complete answer, but you can proceed from here.
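Proceeding from there, a hedged sketch of how the two calls could be combined: build an account-id-to-name map from get_dimension_values(), then use it to label the get_cost_and_usage() groups. The time period and client setup here are illustrative, not from the original answer:

import boto3

ce = boto3.client('ce')
period = {'Start': '2020-01-01', 'End': '2020-03-01'}

# map each linked account id to its human-readable name
names = {
    v['Value']: v['Attributes']['description']
    for v in ce.get_dimension_values(
        TimePeriod=period,
        Dimension='LINKED_ACCOUNT',
        Context='COST_AND_USAGE',
    )['DimensionValues']
}

# group costs by linked account, then translate ids to names
usage = ce.get_cost_and_usage(
    TimePeriod=period,
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'DIMENSION', 'Key': 'LINKED_ACCOUNT'}],
)
for result in usage['ResultsByTime']:
    for group in result['Groups']:
        account_id = group['Keys'][0]
        cost = group['Metrics']['UnblendedCost']['Amount']
        print(names.get(account_id, account_id), cost)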

Getting Lex bot checksum value

I am trying to update a Lex bot using the Python SDK put_bot function. I need to pass a checksum value, as specified here. How do I get this value?
So far I have tried the functions below and the checksum values returned from them:
get_bot with the prod alias
the get_bot_alias function
the get_bot_aliases function
Is the checksum value returned by any of these the one I need?
Lex Model Building Service:
checksum (string) --
Identifies a specific revision of the $LATEST version.
When you create a new bot, leave the checksum field blank. If you specify a checksum you get a BadRequestException exception.
When you want to update a bot, set the checksum field to the checksum of the most recent revision of the $LATEST version. If you don't specify the checksum field, or if the checksum does not match the $LATEST version, you get a PreconditionFailedException exception.
You should first retrieve the checksum of your bot if you want to update it.
You should be able to use the same checksum that is returned from get_bot_aliases().
This is the example response from the get_bot_aliases() function:
{
    'BotAliases': [
        {
            'name': 'string',
            'description': 'string',
            'botVersion': 'string',
            'botName': 'string',
            'lastUpdatedDate': datetime(2015, 1, 1),
            'createdDate': datetime(2015, 1, 1),
            'checksum': 'string',  # <-- checksum here
            'conversationLogs': {
                'logSettings': [
                    {
                        'logType': 'AUDIO'|'TEXT',
                        'destination': 'CLOUDWATCH_LOGS'|'S3',
                        'kmsKeyArn': 'string',
                        'resourceArn': 'string',
                        'resourcePrefix': 'string'
                    },
                ],
                'iamRoleArn': 'string'
            }
        },
    ],
    'nextToken': 'string'
}
If you are trying to update an intent, first call get_intent, save the checksum, and use it in put_intent. If you are using the put_bot API, first call get_bot, save the checksum from that response, and use it in put_bot.
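A hedged sketch of that get_bot/put_bot round trip (the bot name 'MyBot' is a placeholder, and only a few of put_bot's fields are shown):

import boto3

client = boto3.client('lex-models')

# fetch the $LATEST revision to obtain its current checksum
bot = client.get_bot(name='MyBot', versionOrAlias='$LATEST')

# echo the existing definition back, passing the checksum so the
# update targets the most recent revision of $LATEST
client.put_bot(
    name='MyBot',
    intents=bot['intents'],
    locale=bot['locale'],
    childDirected=bot['childDirected'],
    checksum=bot['checksum'],
)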

Boto3 athena query without saving data to s3

I am trying to use Boto3 to run a set of queries, and I don't want to save the data to S3. Instead, I just want to get the results and work with them. I am trying the following:
import boto3

client = boto3.client('athena')
response = client.start_query_execution(
    QueryString='''SELECT * FROM mytable limit 10''',
    QueryExecutionContext={
        'Database': 'my_db'
    },
    ResultConfiguration={
        'OutputLocation': 's3://outputpath',
    }
)
print(response)
But I don't want to give ResultConfiguration here because I don't want to write the results anywhere. If I remove the ResultConfiguration parameter, I get the following error:
botocore.exceptions.ParamValidationError: Parameter validation failed:
Missing required parameter in input: "ResultConfiguration"
So it seems that giving an S3 output location for writing is mandatory. Is there a way to avoid this and get the results only in the response?
The StartQueryExecution action does indeed require an S3 output location; the ResultConfiguration parameter is mandatory.
An alternative way to query Athena is through the JDBC or ODBC drivers. You should probably use that method if you don't want to store results in S3.
You will have to specify an S3 temp bucket location whenever running the 'start_query_execution' command. However, you can get a result set (a dict) by running the 'get_query_results' method using the query id.
The response (dict) will look like this:
{
    'UpdateCount': 123,
    'ResultSet': {
        'Rows': [
            {
                'Data': [
                    {
                        'VarCharValue': 'string'
                    },
                ]
            },
        ],
        'ResultSetMetadata': {
            'ColumnInfo': [
                {
                    'CatalogName': 'string',
                    'SchemaName': 'string',
                    'TableName': 'string',
                    'Name': 'string',
                    'Label': 'string',
                    'Type': 'string',
                    'Precision': 123,
                    'Scale': 123,
                    'Nullable': 'NOT_NULL'|'NULLABLE'|'UNKNOWN',
                    'CaseSensitive': True|False
                },
            ]
        }
    },
    'NextToken': 'string'
}
For more information, see boto3 client doc: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/athena.html#Athena.Client.get_query_results
You can then delete all files in the S3 temp bucket you've specified.
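Putting that together, a hedged sketch of the full round trip: start the query with a temporary output location, poll until it finishes, then read the rows from the API rather than from S3. The bucket path and table names are the ones from the question:

import time

import boto3

client = boto3.client('athena')

# an S3 output location is still required, but we never read from it
qid = client.start_query_execution(
    QueryString='SELECT * FROM mytable limit 10',
    QueryExecutionContext={'Database': 'my_db'},
    ResultConfiguration={'OutputLocation': 's3://outputpath'},
)['QueryExecutionId']

# poll until the query reaches a terminal state
while True:
    state = client.get_query_execution(
        QueryExecutionId=qid
    )['QueryExecution']['Status']['State']
    if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    time.sleep(1)

# fetch the result set directly; each cell arrives as a VarCharValue string
if state == 'SUCCEEDED':
    rows = client.get_query_results(QueryExecutionId=qid)['ResultSet']['Rows']
    for row in rows:
        print([col.get('VarCharValue') for col in row['Data']])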
You still need to provide an S3 location as temporary storage for Athena to save the data, even though you want to process it in Python. But you can page through the data using the pagination API; please refer to the example here. Hope that helps.
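A hedged sketch of that pagination, reusing the client and query-execution id (qid) from the earlier sketch; get_query_results has a built-in paginator:

# page through result sets larger than one API response
paginator = client.get_paginator('get_query_results')
for page in paginator.paginate(QueryExecutionId=qid):
    for row in page['ResultSet']['Rows']:
        print([col.get('VarCharValue') for col in row['Data']])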

How to send GET request to API

Summary: I have a job board where a user searches a zip code and all the jobs matching that zip code are displayed. I am trying to add a feature that lets you see jobs within a certain mile radius of that zip code. There is a web API (www.zipcodeapi.com) that does these calculations and returns the zip codes within the specified radius; I am just unsure how to use it.
Using www.zipcodeapi.com, you enter a zip code and a distance, and it returns all zip codes within that distance. The format for an API request is as follows: https://www.zipcodeapi.com/rest/<api_key>/radius.<format>/<zip_code>/<distance>/<units>, so if a user enters zip code '10566' and a distance of 5 miles, the request would be https://www.zipcodeapi.com/rest/<api_key>/radius.json/10566/5/miles and this would return:
{
    "zip_codes": [
        {
            "zip_code": "10521",
            "distance": 4.998,
            "city": "Croton On Hudson",
            "state": "NY"
        },
        {
            "zip_code": "10548",
            "distance": 3.137,
            "city": "Montrose",
            "state": "NY"
        }
        #etc...
    ]
}
My question is: how do I send a GET request to the API using Django?
I have the user-searched zip code stored in zip = request.GET.get('zip') and the mile radius stored in mile_radius = request.GET['mile_radius']. How can I incorporate those two values in their respective spots in https://www.zipcodeapi.com/rest/<api_key>/radius.<format>/<zip_code>/<distance>/<units> and send the request? Can this be done with Django, or do I have this all confused? Does it need to be done with a frontend language? I have tried to search this on Google but only found results about building RESTful APIs, and I don't think that is what I am looking for. Thanks in advance for any help; if you couldn't tell, I've never worked with a web API before.
You can use the requests package to do exactly what you want. It's pretty straightforward and has good documentation.
Here's an example of how you could do it for your case:
import requests

zip_code = request.GET.get('zip')
mile_radius = request.GET['mile_radius']
api_key = YOUR_API_KEY
fmt = 'json'
units = 'miles'

response = requests.get(
    url=f'https://www.zipcodeapi.com/rest/{api_key}/radius.{fmt}/{zip_code}/{mile_radius}/{units}'
)
zip_codes = response.json().get('zip_codes')
zip_codes should then be an array with those dicts as in your example.
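One small addition worth making (a hedged suggestion, not part of the original answer): check for HTTP errors before parsing, so a bad API key or malformed zip code fails loudly instead of producing an empty result:

response.raise_for_status()  # raises requests.HTTPError on 4xx/5xx responses
zip_codes = response.json().get('zip_codes', [])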

How to use Google Place Add in Python

I'm using the Google Places API for Web Service, in Python,
and I'm trying to add places as in the tutorial here.
My code is here:
from googleplaces import GooglePlaces, types, lang, GooglePlacesError, GooglePlacesAttributeError

API_KEY = "[Google Place API KEY]"
google_places = GooglePlaces(API_KEY)

try:
    added_place = google_places.add_place(
        name='Mom and Pop local store',
        lat_lng={'lat': 51.501984, 'lng': -0.141792},
        accuracy=100,
        types=types.TYPE_HOME_GOODS_STORE,
        language=lang.ENGLISH_GREAT_BRITAIN)
except GooglePlacesError as error_detail:
    print error_detail
But I kept getting this error:
I tried changing the input to JSON format or Python dictionary format, but then it gave the error "google_places.add_place() only accept 1 parameter, 2 give"...
Is there a right way to use the Google Places API add-place method in Python?
Oh, I finally found the solution, and it's so simple... I wasn't familiar with Python POST requests; in fact, everything is easy.
You just need the code here, and you will be able to add a place to the Google Places API with Python:
import requests

post_url = "https://maps.googleapis.com/maps/api/place/add/json?key=" + [YOUR_API_KEY]
r = requests.post(post_url, json={
    "location": {
        "lat": -33.8669710,
        "lng": 151.1958750
    },
    "accuracy": 50,
    "name": "Google Shoes!",
    "phone_number": "(02) 9374 4000",
    "address": "48 Pirrama Road, Pyrmont, NSW 2009, Australia",
    "types": ["shoe_store"],
    "website": "http://www.google.com.au/",
    "language": "en-AU"
})
To check the results:
print r.status_code
print r.json()