How to authenticate a user to AWS cognito with pyscript? - amazon-web-services

I want to authenticate a user to AWS cognito from within pyscript.
In Python, one would use the boto3 module. In PyScript, however, I have to make the calls to AWS in async mode and await the answer, and boto3 doesn't support asyncio calls.
I found the python aioboto3 module.
Unfortunately, pyscript reports that it's unable to install the package:
(PY1001): Unable to install package(s) 'aioboto3'. Reason: Can't find a pure Python 3 Wheel for package(s) 'aioboto3'
The same error (for another package) is mentioned in another question:
Couldn't find a pure Python 3 wheel for 'tensorflow'. You can use `micropip.install(..., keep_going=True)` to get a list of all packages
Does anybody have an idea how I can interface with AWS from pyscript?
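One possible workaround, not taken from the question but sketched here under stated assumptions: since PyScript runs in the browser, you can skip boto3 entirely and call the Cognito Identity Provider JSON API over HTTPS with pyodide.http.pyfetch. The InitiateAuth call with the USER_PASSWORD_AUTH flow does not need SigV4 signing, assuming the app client has no client secret and that flow is enabled; region, client_id, username and password below are placeholders.

import json
from pyodide.http import pyfetch

async def cognito_login(region, client_id, username, password):
    # Call the Cognito IdP JSON API directly; no SigV4 signing is needed for
    # InitiateAuth on an app client without a secret (assumption stated above).
    response = await pyfetch(
        f"https://cognito-idp.{region}.amazonaws.com/",
        method="POST",
        headers={
            "Content-Type": "application/x-amz-json-1.1",
            "X-Amz-Target": "AWSCognitoIdentityProviderService.InitiateAuth",
        },
        body=json.dumps({
            "AuthFlow": "USER_PASSWORD_AUTH",
            "ClientId": client_id,
            "AuthParameters": {"USERNAME": username, "PASSWORD": password},
        }),
    )
    result = await response.json()
    # On success the tokens are under result["AuthenticationResult"]
    # (IdToken, AccessToken, RefreshToken).
    return result

You would await this coroutine from an async task in the PyScript page, e.g. with asyncio.ensure_future(cognito_login(...)).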

Related

EMR Serverless Airflow Operator not allowing EMR custom images

I want to launch a Spark job on EMR Serverless from Airflow. I want to use Spark 3.3.0 and Scala 2.13 but the 6.9.0 EMR Release ships with Scala 2.12. I created a FAT jar including all Spark dependencies and it won't work either. As an alternative, I am trying to use an EMR custom image by creating an application using --image-configuration with the Airflow operator but it won't just pass through all the arguments from the boto API.
create_app = EmrServerlessCreateApplicationOperator(
    task_id="create_my_app",
    job_type="SPARK",
    release_label="emr-6.9.0",
    config={
        "name": "data-ingestion",
        "imageConfiguration": {
            "imageUri": "xxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/emr-custom-image:0.0.1"
        },
    },
)
Airflow gives the following error message:
Unknown parameter in input: "imageConfiguration", must be one of:
name, releaseLabel, type, clientToken, initialCapacity, maximumCapacity, tags, autoStartConfiguration, autoStopConfiguration, networkConfiguration
This other config won't work either:
config={"name": "data-ingestion",
"imageUri": "xxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/emr-custom-image:0.0.1"})
Does anybody have any ideas other than downgrading my Scala version?
The Airflow operator passes the arguments to the boto3 client, and this client creates the application.
The imageConfiguration parameter was added to the boto3 client in 1.26.44 (PR), and the other parameters were added in different versions (please check the changelog).
So you can try to upgrade the boto3 version on your Airflow server, provided that it is compatible with the other dependencies; if not, you may need to upgrade your Airflow version.
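As a quick sanity check (a minimal sketch, nothing EMR-specific), you can print the boto3/botocore versions your Airflow environment actually uses before relying on imageConfiguration:

# Run this where your Airflow tasks run; imageConfiguration needs boto3 >= 1.26.44.
import boto3
import botocore

print(boto3.__version__, botocore.__version__)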

Running Taurus BlazeMeter on AWS Lambda

I am trying to run a BlazeMeter Taurus script with a JMeter script inside via AWS Lambda. I'm hoping that there is a way to run bzt via a local installation in /tmp/bzt instead of looking for a bzt installation on the system, which doesn't really exist since it's Lambda.
This is my lambda_handler.py:
import subprocess
import json

def run_taurus_test(event, context):
    subprocess.call(['mkdir', '/tmp/bzt/'])
    subprocess.call(['pip', 'install', '--target', '/tmp/bzt/', 'bzt'])
    # subprocess.call('ls /tmp/bzt/bin'.split())
    subprocess.call(['/tmp/bzt/bin/bzt', 'tests/taurus_test.yaml'])
    return {
        'statusCode': 200,
        'body': json.dumps('Executing Taurus Test hopefully!')
    }
The taurus_test.yaml runs as expected when testing on my computer with bzt installed via pip normally, so I know the issue isn't with the test script. The same traceback as below appears if I uninstall bzt from my system and try to use a local installation targeted in a certain directory.
This is the traceback in the execution results:
Traceback (most recent call last):
  File "/tmp/bzt/bin/bzt", line 5, in <module>
    from bzt.cli import main
ModuleNotFoundError: No module named 'bzt'
Technically it's the /tmp/bzt/bin/bzt executable that's failing, and I think it is because it's not using the local/targeted installation.
So, I'm hoping there is a way to tell bzt to keep using the targeted installation in /tmp/bzt instead of calling the executable there and then having it look for an installation that doesn't exist elsewhere. Feedback on whether AWS Fargate or EC2 would be better suited for this is also appreciated.
Depending on the size of the bzt package, the solutions are:
Use the recent Lambda container image feature; this way, what you run locally is what you get on Lambda.
Use Lambda layers (similar to Docker); the layer holds the bzt module in the python directory, as described there.
When you package your Lambda, instead of uploading a simple Python file, create a ZIP file containing both /path/to/zip_root/lambda_handler.py and the dependencies installed with pip install --target /path/to/zip_root.
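If you prefer to keep the question's approach of installing into /tmp at runtime, a minimal sketch (untested, assuming the pip --target layout from the question) is to pass PYTHONPATH to the subprocess, so the generated /tmp/bzt/bin/bzt launcher can resolve the bzt module it was installed with:

import os
import subprocess

def run_bzt_from_tmp(config_path):
    env = dict(os.environ)
    # The launcher does "from bzt.cli import main", so point it at the --target dir.
    env['PYTHONPATH'] = '/tmp/bzt' + os.pathsep + env.get('PYTHONPATH', '')
    return subprocess.call(['/tmp/bzt/bin/bzt', config_path], env=env)

The packaging options above avoid the per-invocation pip install entirely.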

Accessing Airflow REST API in AWS Managed Workflows?

I have Airflow running in AWS MWAA. I would like to access the REST API; there are 2 ways to do this, but neither seems to work for me.
Overriding api.auth_backend. This used to work, but now AWS MWAA won't allow you to add it; the option is considered 'blocklisted' and is not allowed.
api.auth_backend = airflow.api.auth.backend.default
Using the MWAA CLI (Python). This doesn't work if any of the DAGs uses packages that are in the requirements.txt file.
a. As an example, I have "paramiko" in requirements.txt because I have a task that uses SSHOperator. The MWAA CLI fails with "no module paramiko".
b. Also noted here, https://docs.aws.amazon.com/mwaa/latest/userguide/access-airflow-ui.html
"Any command that parses a DAG (such as list_dags, backfill) will fail if the DAG uses plugins that depend on packages that are installed through requirements.txt."
We are using MWAA 2.0.2 and managed to use Airflow's REST API through the MWAA CLI, basically following the instructions and sample code of the Apache Airflow CLI command reference. You'll notice that not all REST API calls are supported, but many of them are (even when you have a requirements.txt in place).
Also have a look at the AWS sample code on GitHub.
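For reference, here is a rough sketch of the token-based pattern those docs describe (names are placeholders, not verified against your environment): request a short-lived CLI token via boto3, then POST the Airflow CLI command to the MWAA web server, which typically returns base64-encoded stdout/stderr.

import base64
import boto3
import requests

def run_mwaa_cli(env_name, raw_command):
    # e.g. run_mwaa_cli("my-mwaa-env", "dags list") on Airflow 2.x
    token = boto3.client("mwaa").create_cli_token(Name=env_name)
    resp = requests.post(
        f"https://{token['WebServerHostname']}/aws_mwaa/cli",
        headers={
            "Authorization": f"Bearer {token['CliToken']}",
            "Content-Type": "text/plain",
        },
        data=raw_command,
    )
    result = resp.json()
    return base64.b64decode(result["stdout"]).decode("utf-8")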

How to use a non-JDBC connector in a custom Policy Information Point (PIP)

I want my custom Policy Information Point (PIP) to be able to connect to OpenStack Swift (an Object Storage application) thanks to this connector. The connector is able to retrieve metadata from an object, which would be sent back to the Policy Decision Point (PDP) when the right AttributeId is requested.
In our context, Swift holds information about our resources (objects here).
Below are all the steps I followed to try and use this connector in my custom PIP.
I got the connector to work locally (no integration with WSO2IS yet) following only the first 3 steps and calling the swift_test function in a debug-purposed main class.
I followed this guide to implement a custom PIP, which suggests using a database offering a JDBC driver (such as mariadb). My issue is that Swift does not offer a JDBC driver, hence the use of the openstack4j connector.
I added the needed openstack4j dependency to the maven project linked in the guide (here).
I also added the following imports to the extended class (named KMarketJDBCAttributeFinder in the guide):
package org.xacmlinfo.xacml.pip.jdbc;
import org.openstack4j.api.OSClient.OSClientV3;
import org.openstack4j.model.common.Identifier;
import org.openstack4j.model.storage.object.SwiftAccount;
import org.openstack4j.model.storage.object.SwiftContainer;
import org.openstack4j.model.storage.object.SwiftObject;
import org.openstack4j.model.storage.object.options.ObjectListOptions;
import org.openstack4j.openstack.OSFactory;
import org.wso2.carbon.identity.entitlement.pip.AbstractPIPAttributeFinder;
...
and this function to test and retrieve an object's metadata:
public void swift_test() {
    OSClientV3 os = OSFactory.builderV3()
            .endpoint(our_keystoneV3_url)
            .credentials(our_keystone_username, password, domain_identifier)
            .scopeToProject(Identifier.byName(our_tenant), Identifier.byName(our_domain))
            .authenticate();
    SwiftAccount account = os.objectStorage().account().get();
    Map<String, String> md =
            os.objectStorage().objects().getMetadata("our_container", "our_object");
    System.out.println(md.toString());
}
which I call in the overridden getAttributeValues method of the custom PIP class.
I then built the class using mvn package to generate the .jar which I copied in <IS_HOME>/repository/components/lib.
I downloaded all my dependencies using mvn dependency:copy-dependencies, and copied them all to the same <IS_HOME>/repository/components/lib folder.
I start the wso2is server, send an XACML request which involves calling my custom PIP, and see the following error in <IS_HOME>/repository/logs/wso2carbon.log:
ERROR {org.openstack4j.core.transport.internal.HttpExecutor} - No OpenStack4j connector found in classpath
ERROR {org.wso2.carbon.identity.entitlement.pip.CarbonAttributeFinder} - Error while retrieving attribute values from PIP attribute finder: org.openstack4j.api.exceptions.ConnectorNotFoundException: No OpenStack4j connector found in classpath
ERROR {org.wso2.balana.finder.AttributeFinder} - Error while trying to resolve values: Error while retrieving attribute values from PIP attribute finder: No OpenStack4j connector found in classpath
Our wso2is server, version 5.7.0, runs on a CentOS 7.6 VM.
My question is then: can we use this kind of connector in a custom PIP with wso2is? If so, how would I go about resolving the classpath issues between my dependencies?
P.S. : I previously added another custom PIP, connecting this time to a MariaDB database which is working well. The .jar I create when using mvn package contains both custom PIPs, and they are both recognized in the section "PDP > Extension" in the wso2is web interface.
Here is the list of dependencies obtained with the mvn command:
activation-1.1.1.jar
btf-1.2.jar
classworlds-1.1-alpha-2.jar
commons-codec-1.9.jar
commons-io-2.3.jar
commons-lang-2.6.jar
commons-logging-1.2.jar
guava-20.0.jar
httpclient-4.5.3.jar
httpcore-4.4.6.jar
jackson-annotations-2.7.0.jar
jackson-core-2.7.3.jar
jackson-core-asl-1.9.7.jar
jackson-coreutils-1.6.jar
jackson-databind-2.7.3.jar
jackson-dataformat-yaml-2.7.3.jar
jackson-jaxrs-base-2.7.3.jar
jackson-jaxrs-json-provider-2.7.3.jar
jackson-mapper-asl-1.9.7.jar
jackson-module-jaxb-annotations-2.7.3.jar
jboss-annotations-api_1.2_spec-1.0.0.Final.jar
jboss-jaxrs-api_2.0_spec-1.0.1.Beta1.jar
jboss-logging-3.3.0.Final.jar
jcip-annotations-1.0.jar
jcl-over-slf4j-1.7.2.jar
jdom2-2.0.6.jar
joss-0.10.2.jar
json-patch-1.9.jar
jsr305-2.0.0.jar
junit-3.8.1.jar
maven-artifact-2.0.jar
maven-compiler-plugin-2.0.jar
maven-plugin-api-2.0.jar
msg-simple-1.1.jar
openstack4j-3.1.0.jar
openstack4j-core-3.1.0.jar
openstack4j-resteasy-3.1.0.jar
org.wso2.carbon.identity.entitlement-4.2.0.jar
plexus-compiler-api-1.5.1.jar
plexus-compiler-javac-1.5.1.jar
plexus-compiler-manager-1.5.1.jar
plexus-container-default-1.0-alpha-8.jar
plexus-utils-1.0.4.jar
resteasy-client-3.1.4.Final.jar
resteasy-jaxrs-3.1.4.Final.jar
resteasy-jaxrs-services-3.1.4.Final.jar
slf4j-api-1.7.2.jar
snakeyaml-1.15.jar

AWS Lambda Console - Upgrade boto3 version

I am creating a DeepLens project to recognise people when one of a select group of people is scanned by the camera.
The project uses a Lambda function, which processes the images and triggers the 'rekognition' AWS API.
When I trigger the API from my local machine - I get a good response
When I trigger the API from AWS console - I get failed response
Problem
After much digging, I found that the 'boto3' (AWS python library) is of version:
1.9.62 - on my local machine
1.8.9 - on AWS console
Question
Can I upgrade the 'boto3' library version on the AWS Lambda console? If so, how?
If you don't want to package a more recent boto3 version with your function, you can download boto3 with each invocation of the Lambda. Remember that /tmp/ is the directory that Lambda will allow you to download to, so you can use this to temporarily download boto3:
import sys
from pip._internal import main

main(['install', '-I', '-q', 'boto3', '--target', '/tmp/', '--no-cache-dir', '--disable-pip-version-check'])
sys.path.insert(0, '/tmp/')

import boto3
from botocore.exceptions import ClientError

def handler(event, context):
    print(boto3.__version__)
You can achieve the same with either a Python function with dependencies or with a virtual environment.
These are the available options; other than that, you can also try to contact the Amazon team to see if they can help you with the upgrade.
I know you're asking for a solution through the console, but as far as I know this is not possible.
To solve this you need to provide the boto3 version you require to your lambda (either with the solution from user1998671 or with what Shivang Agarwal is proposing). A third solution is to provide the required boto3 version as a layer for the lambda. The big advantage of the layer is that you can re-use it for all your lambdas.
This can be achieved by following the guide from AWS (the following is mainly copied from the linked guide from AWS):
IMPORTANT: Make sure to replace boto3-mylayer with a name that suits you.
Create a lib folder by running the following command:
LIB_DIR=boto3-mylayer/python
mkdir -p $LIB_DIR
Install the library to LIB_DIR by running the following command:
pip3 install boto3 -t $LIB_DIR
Zip all the dependencies to /tmp/boto3-mylayer.zip by running the following command:
cd boto3-mylayer
zip -r /tmp/boto3-mylayer.zip .
Publish the layer by running the following command:
aws lambda publish-layer-version --layer-name boto3-mylayer --zip-file fileb:///tmp/boto3-mylayer.zip
The command returns the new layer's Amazon Resource Name (ARN), similar to the following one:
arn:aws:lambda:region:$ACC_ID:layer:boto3-mylayer:1
To attach this layer to your lambda execute the following:
aws lambda update-function-configuration --function-name <name-of-your-lambda> --layers <layer ARN>
To verify the boto3 version in your lambda you can simply add the following two print statements (with the corresponding imports) to your lambda:
import boto3
import botocore

print(boto3.__version__)
print(botocore.__version__)