I have the following program, in which I want to fetch these values from the string:
"planned", "not automated", "st3reporter", "functional", "report-upto3times-per2hrs", "st3-throttling-cdb"
import re
string='''
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;
public class ReportUpToTimesEveryHours {
String objective = "04 - Report up to 3 times every 2 hours";
String testName = "ReportUpToTimesEveryHours";
@BeforeMethod(alwaysRun = true)
public void beforeMethod() {
logger.info(Constants.LOGGER_SEPERATOR);
logger.info("Start -- " + testName + " - " + objective);
}
@Test(groups = { "planned", "not automated", "st3reporter", "functional", "report-upto3times-per2hrs", "st3-throttling-cdb" },
description = "04 - Report up to 3 times every 2 hours")
public void testReportUpToTimesEveryHours() {
}
@AfterMethod(alwaysRun = true)
public void afterMethod() {
logger.info("End -- " + testName + " - " + objective);
logger.info(Constants.LOGGER_SEPERATOR);
}
}
'''
pattern = re.compile(r'(@Test\(groups\s*=\s*\{)')
m = pattern.search(string)
print(m.group())
Try something like this:
pattern = re.compile(r'@Test\(groups\s*=\s*\{([^}]+)')
result_set = [i.strip() for i in
pattern.search(string).group(1).replace('"', '').split(',')]
The capture group grabs everything between the braces; stripping the quotes and splitting on commas stores the values in a list:
['planned', 'not automated', 'st3reporter', 'functional', 'report-upto3times-per2hrs', 'st3-throttling-cdb']
I get this error when I try to compare two images from my S3 bucket. I follow the function's rules to get an image from an S3Object with the correct name and S3 bucket name, but it throws the InvalidImageFormatException. Does the image have to be base64 or a ByteBuffer? I don't understand, then, why there is a function to get it from an S3Object.
My code is simple and as follows:
String reference = "reference.jpg";
String target = "selfie.jpg";
String bucket = "pruebas";
CompareFacesRequest request = new CompareFacesRequest()
.withSourceImage(new Image().withS3Object(new S3Object()
.withName(reference).withBucket(bucket)))
.withTargetImage(new Image().withS3Object(new S3Object()
.withName(target).withBucket(bucket)))
.withSimilarityThreshold(similarityThreshold);
AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.standard()
.withRegion(Regions.US_EAST_1).build();
CompareFacesResult compareFacesResult= rekognitionClient.compareFaces(request);
The exception is thrown by compareFaces(request) on the last line.
This is the main part of the error:
Exception in thread "main" com.amazonaws.services.rekognition.model.InvalidImageFormatException: Request has invalid image format (Service: AmazonRekognition; Status Code: 400; Error Code: InvalidImageFormatException;
The images are in AWS S3, and my Rekognition credentials have permission to read from S3, so the error is not in that part.
UPDATE CODE:
public static void main(String[] args) throws FileNotFoundException {
Float similarityThreshold = 70F;
String reference = "reference.jpg";
String target = "target.jpg";
String bucket = "pruebas";
ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.builder().profileName("S3").build();
Region region = Region.US_EAST_1;
S3Client s3 = S3Client.builder()
.region(region)
.credentialsProvider(credentialsProvider)
.build();
byte[] sourceStream = getObjectBytes(s3, bucket,reference);
byte[] tarStream = getObjectBytes(s3, bucket, target);
SdkBytes sourceBytes = SdkBytes.fromByteArrayUnsafe(sourceStream);
SdkBytes targetBytes = SdkBytes.fromByteArrayUnsafe(tarStream);
Image souImage = Image.builder()
.bytes(sourceBytes)
.build();
Image tarImage = Image.builder()
.bytes(targetBytes)
.build();
CompareFacesRequest request = CompareFacesRequest.builder()
.sourceImage(souImage)
.targetImage(tarImage)
.similarityThreshold(similarityThreshold).build();
RekognitionClient rekognitionClient = RekognitionClient.builder()
.region(Region.US_EAST_2).build();
CompareFacesResponse compareFacesResult= rekognitionClient.compareFaces(request);
List<CompareFacesMatch> faceDetails = compareFacesResult.faceMatches();
for (CompareFacesMatch match: faceDetails){
ComparedFace face= match.face();
BoundingBox position = face.boundingBox();
System.out.println("Face at " + position.left().toString()
+ " " + position.top()
+ " matches with " + face.confidence().toString()
+ "% confidence.");
}
List<ComparedFace> uncompared = compareFacesResult.unmatchedFaces();
System.out.println("There was " + uncompared.size()
+ " face(s) that did not match");
System.out.println("Source image rotation: " + compareFacesResult.sourceImageOrientationCorrection());
System.out.println("target image rotation: " + compareFacesResult.targetImageOrientationCorrection());
}
public static byte[] getObjectBytes (S3Client s3, String bucketName, String keyName) {
try {
GetObjectRequest objectRequest = GetObjectRequest
.builder()
.key(keyName)
.bucket(bucketName)
.build();
ResponseBytes<GetObjectResponse> objectBytes = s3.getObjectAsBytes(objectRequest);
return objectBytes.asByteArray();
} catch (S3Exception e) {
System.err.println(e.awsErrorDetails().errorMessage());
System.exit(1);
}
return null;
}
Try using the AWS SDK for Java V2, not the old V1 lib. Using V2 over V1 is strongly recommended and is best practice.
Here is the V2 code that works fine to compare faces. In this example, notice that you have to get the image into an SdkBytes object. It does not matter where the image is located as long as you get it into SdkBytes; the image can be in an S3 bucket, on the local file system, etc.
You can find this V2 Rekognition example in the AWS GitHub repo here:
https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/rekognition/src/main/java/com/example/rekognition
Also, you can use the S3 Java V2 service client to read an image in an S3 bucket and get the byte[]. From there, you can create an SdkBytes object to use in CompareFaces.
https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/GetObjectData.java
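For example, here is a minimal sketch of that approach (the bucket and key names are placeholders; the full sample is at the link above):
// Read the object from S3 with the V2 client and wrap the bytes for Rekognition.
ResponseBytes<GetObjectResponse> objectBytes = s3.getObjectAsBytes(
        GetObjectRequest.builder().bucket("my-bucket").key("pic.jpg").build());
SdkBytes imageBytes = SdkBytes.fromByteArray(objectBytes.asByteArray());
Image image = Image.builder().bytes(imageBytes).build();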
Here is the full Java example for CompareFaces. Notice there is a URL to the Java developer guide if you do not know how to get up and running with the AWS SDK for Java V2, including how to set up your credentials.
package com.example.rekognition;
// snippet-start:[rekognition.java2.compare_faces.import]
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.CompareFacesRequest;
import software.amazon.awssdk.services.rekognition.model.CompareFacesResponse;
import software.amazon.awssdk.services.rekognition.model.CompareFacesMatch;
import software.amazon.awssdk.services.rekognition.model.ComparedFace;
import software.amazon.awssdk.services.rekognition.model.BoundingBox;
import software.amazon.awssdk.core.SdkBytes;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.util.List;
// snippet-end:[rekognition.java2.compare_faces.import]
/**
* Before running this Java V2 code example, set up your development environment, including your credentials.
*
* For more information, see the following documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class CompareFaces {
public static void main(String[] args) {
final String usage = "\n" +
"Usage: " +
" <pathSource> <pathTarget>\n\n" +
"Where:\n" +
" pathSource - The path to the source image (for example, C:\\AWS\\pic1.png). \n " +
" pathTarget - The path to the target image (for example, C:\\AWS\\pic2.png). \n\n";
if (args.length != 2) {
System.out.println(usage);
System.exit(1);
}
Float similarityThreshold = 70F;
String sourceImage = args[0];
String targetImage = args[1];
Region region = Region.US_EAST_1;
RekognitionClient rekClient = RekognitionClient.builder()
.region(region)
.credentialsProvider(ProfileCredentialsProvider.create())
.build();
compareTwoFaces(rekClient, similarityThreshold, sourceImage, targetImage);
rekClient.close();
}
// snippet-start:[rekognition.java2.compare_faces.main]
public static void compareTwoFaces(RekognitionClient rekClient, Float similarityThreshold, String sourceImage, String targetImage) {
try {
InputStream sourceStream = new FileInputStream(sourceImage);
InputStream tarStream = new FileInputStream(targetImage);
SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream);
SdkBytes targetBytes = SdkBytes.fromInputStream(tarStream);
// Create an Image object for the source image.
Image souImage = Image.builder()
.bytes(sourceBytes)
.build();
Image tarImage = Image.builder()
.bytes(targetBytes)
.build();
CompareFacesRequest facesRequest = CompareFacesRequest.builder()
.sourceImage(souImage)
.targetImage(tarImage)
.similarityThreshold(similarityThreshold)
.build();
// Compare the two images.
CompareFacesResponse compareFacesResult = rekClient.compareFaces(facesRequest);
List<CompareFacesMatch> faceDetails = compareFacesResult.faceMatches();
for (CompareFacesMatch match: faceDetails){
ComparedFace face= match.face();
BoundingBox position = face.boundingBox();
System.out.println("Face at " + position.left().toString()
+ " " + position.top()
+ " matches with " + face.confidence().toString()
+ "% confidence.");
}
List<ComparedFace> uncompared = compareFacesResult.unmatchedFaces();
System.out.println("There was " + uncompared.size() + " face(s) that did not match");
System.out.println("Source image rotation: " + compareFacesResult.sourceImageOrientationCorrection());
System.out.println("target image rotation: " + compareFacesResult.targetImageOrientationCorrection());
} catch(RekognitionException | FileNotFoundException e) {
System.out.println("Failed to load source image " + sourceImage);
System.exit(1);
}
}
// snippet-end:[rekognition.java2.compare_faces.main]
}
This code works fine with JPG images too, as verified by running it in an IDE and inspecting the results (screenshot omitted).
UPDATE
I normally do not touch V1 code at all; however, I was curious. This code worked:
package aws.example.rekognition.image;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.BoundingBox;
import com.amazonaws.services.rekognition.model.CompareFacesMatch;
import com.amazonaws.services.rekognition.model.CompareFacesRequest;
import com.amazonaws.services.rekognition.model.CompareFacesResult;
import com.amazonaws.services.rekognition.model.ComparedFace;
import java.util.List;
import com.amazonaws.services.rekognition.model.S3Object;
public class CompareFacesBucket {
public static void main(String[] args) throws Exception{
Float similarityThreshold = 70F;
AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.standard()
.withRegion(Regions.US_WEST_2)
.build();
String reference = "Lam1.jpg";
String target = "Lam2.jpg";
String bucket = "<MyBucket>";
CompareFacesRequest request = new CompareFacesRequest()
.withSourceImage(new Image().withS3Object(new S3Object()
.withName(reference).withBucket(bucket)))
.withTargetImage(new Image().withS3Object(new S3Object()
.withName(target).withBucket(bucket)))
.withSimilarityThreshold(similarityThreshold);
// Call operation
CompareFacesResult compareFacesResult = rekognitionClient.compareFaces(request);
// Display results
List <CompareFacesMatch> faceDetails = compareFacesResult.getFaceMatches();
for (CompareFacesMatch match: faceDetails){
ComparedFace face= match.getFace();
BoundingBox position = face.getBoundingBox();
System.out.println("Face at " + position.getLeft().toString()
+ " " + position.getTop()
+ " matches with " + face.getConfidence().toString()
+ "% confidence.");
}
List<ComparedFace> uncompared = compareFacesResult.getUnmatchedFaces();
System.out.println("There was " + uncompared.size()
+ " face(s) that did not match");
System.out.println("Source image rotation: " + compareFacesResult.getSourceImageOrientationCorrection());
System.out.println("target image rotation: " + compareFacesResult.getTargetImageOrientationCorrection());
}
}
Output: (screenshot omitted)
The last thing I can think of, since my V1 code works with JPG images and yours does not, is that your JPG files may be corrupt or otherwise not valid image data. I would like to test this code with your images.
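One quick way to rule that out (a hedged sketch, not part of the original code): valid JPEG data always starts with the bytes 0xFF 0xD8, so you can inspect the first bytes of the files you upload:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
// Hypothetical check: InvalidImageFormatException is often caused by
// non-JPEG/PNG bytes stored behind a .jpg key.
public class JpegCheck {
    public static void main(String[] args) throws IOException {
        byte[] data = Files.readAllBytes(Paths.get("reference.jpg"));
        boolean looksLikeJpeg = data.length > 2
                && (data[0] & 0xFF) == 0xFF
                && (data[1] & 0xFF) == 0xD8;
        System.out.println("reference.jpg looks like JPEG: " + looksLikeJpeg);
    }
}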
The backend of my application makes a request to:
https://graph.facebook.com/v2.8/me?access_token=<firebase-access-token>&fields=id,name,first_name,birthday,email,picture.type(large){url}&format=json&method=get&pretty=0&suppress_http_code=1
I get a successful (200) response with the JSON data I expect and picture field as such:
"picture": {
"data": {
"url": "https://platform-lookaside.fbsbx.com/platform/profilepic/?asid=<asid>&height=200&width=200&ext=<ext>&hash=<hash>"
}
}
(where in place of <asid> and <ext>, there are numbers and <hash> is some alphanumeric string).
However, when I make a GET request to the platform-lookaside URL above, I get a 404 error.
It's been happening every time since my very first graph.facebook request for the same user. The very first one returned a platform-lookaside URL which pointed to a proper image (not sure if this is simply coincidence).
Is there something I'm doing wrong or is this likely a bug with the Facebook API?
FB currently seems to have issues with some CDNs, so your issue might only be temporary. You should also see missing/broken images in some places on facebook.com. Worst time to debug your issue :)
Try this code; it worked for me:
GraphRequest request = GraphRequest.newMeRequest(
AccessToken.getCurrentAccessToken(), new GraphRequest.GraphJSONObjectCallback() {
@Override
public void onCompleted(JSONObject object, GraphResponse response) {
// Insert your code here
try {
String name = object.getString("name");
String email = object.getString("email");
String last_name = object.getString("last_name");
String first_name = object.getString("first_name");
String middle_name = object.getString("middle_name");
String link = object.getString("link");
String picture = object.getJSONObject("picture").getJSONObject("data").getString("url");
Log.e("Email = ", " " + email);
Log.e("facebookLink = ", " " + link);
Log.e("name = ", " " + name);
Log.e("last_name = ", " " + last_name);
Log.e("first_name = ", " " + first_name);
Log.e("middle_name = ", " " + middle_name);
Log.e("pictureLink = ", " " + picture);
} catch (JSONException e) {
e.printStackTrace();
Log.e("Sttaaaaaaaaaaaaaaaaa", e.getMessage());
}
}
});
Bundle parameters = new Bundle();
parameters.putString("fields", "id,name,email,link,last_name,first_name,middle_name,picture");
request.setParameters(parameters);
request.executeAsync();
I followed the AWS Rekognition Developer Guide and wrote a stream processor using CreateStreamProcessor in Java.
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.*;
public class StreamProcessor {
private String streamProcessorName;
private String kinesisVideoStreamArn;
private String kinesisDataStreamArn;
private String roleArn;
private String collectionId;
private float matchThreshold;
private AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();
public void createStreamProcessor() {
KinesisVideoStream kinesisVideoStream = new KinesisVideoStream().withArn(kinesisVideoStreamArn);
StreamProcessorInput streamProcessorInput = new StreamProcessorInput().withKinesisVideoStream(kinesisVideoStream);
KinesisDataStream kinesisDataStream = new KinesisDataStream().withArn(kinesisDataStreamArn);
StreamProcessorOutput streamProcessorOutput = new StreamProcessorOutput().withKinesisDataStream(kinesisDataStream);
FaceSearchSettings faceSearchSettings = new FaceSearchSettings().withCollectionId(collectionId)
.withFaceMatchThreshold(matchThreshold);
StreamProcessorSettings streamProcessorSettings = new StreamProcessorSettings().withFaceSearch(faceSearchSettings);
CreateStreamProcessorResult createStreamProcessorResult = rekognitionClient.createStreamProcessor(
new CreateStreamProcessorRequest().withInput(streamProcessorInput).withOutput(streamProcessorOutput)
.withSettings(streamProcessorSettings).withRoleArn(roleArn).withName(streamProcessorName));
System.out.println("StreamProcessorArn - " +
createStreamProcessorResult.getStreamProcessorArn());
}
public void startStreamProcessor() {
StartStreamProcessorResult startStreamProcessorResult = rekognitionClient.startStreamProcessor(
new StartStreamProcessorRequest().withName(streamProcessorName));
}
public void stopStreamProcessorSample() {
StopStreamProcessorResult stopStreamProcessorResult = rekognitionClient.stopStreamProcessor(
new StopStreamProcessorRequest().withName(streamProcessorName));
}
public void deleteStreamProcessorSample() {
DeleteStreamProcessorResult deleteStreamProcessorResult = rekognitionClient.deleteStreamProcessor(
new DeleteStreamProcessorRequest().withName(streamProcessorName));
}
public void describeStreamProcessorSample() {
DescribeStreamProcessorResult describeStreamProcessorResult = rekognitionClient.describeStreamProcessor(
new DescribeStreamProcessorRequest().withName(streamProcessorName));
System.out.println("Arn - " + describeStreamProcessorResult.getStreamProcessorArn());
System.out.println("Input kinesisVideo stream - " + describeStreamProcessorResult.getInput()
.getKinesisVideoStream().getArn());
System.out.println("Output kinesisData stream - " + describeStreamProcessorResult.getOutput()
.getKinesisDataStream().getArn());
System.out.println("RoleArn - " + describeStreamProcessorResult.getRoleArn());
System.out.println("CollectionId - " + describeStreamProcessorResult.getSettings().getFaceSearch()
.getCollectionId());
System.out.println("Status - " + describeStreamProcessorResult.getStatus());
System.out.println("Status message - " + describeStreamProcessorResult.getStatusMessage());
System.out.println("Creation timestamp - " + describeStreamProcessorResult.getCreationTimestamp());
System.out.println("Last updatClient rekognitionClient = new AmazonRekognitionClient()e timestamp - "
+ describeStreamProcessorResult.getLastUpdateTimestamp());
}
public void listStreamProcessorSample() {
ListStreamProcessorsResult listStreamProcessorsResult = rekognitionClient.listStreamProcessors(
new ListStreamProcessorsRequest().withMaxResults(100));
for (com.amazonaws.services.rekognition.model.StreamProcessor streamProcessor :
listStreamProcessorsResult.getStreamProcessors()) {
System.out.println("StreamProcessor name - " + streamProcessor.getName());
System.out.println("Status - " + streamProcessor.getStatus());
}
}
}
But I can't figure out how to start the stream processor. Do I simply have to write a main method and call the createStreamProcessor() function? Or do I have to do something else, like the StartStreamProcessor call the guide mentions?
Yes, you have to start the stream processor using the following API:
https://docs.aws.amazon.com/rekognition/latest/dg/API_StartStreamProcessor.html
rekognitionClient.startStreamProcessor(new StartStreamProcessorRequest().withName("my-stream-1"));
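For instance, a minimal main method could create the processor once and then start it (a sketch; it assumes the private fields of your class, such as streamProcessorName, the ARNs, roleArn and collectionId, have been initialized, e.g. via a constructor):
public static void main(String[] args) {
    StreamProcessor processor = new StreamProcessor();
    processor.createStreamProcessor(); // one-time creation of the processor
    processor.startStreamProcessor();  // begins consuming the Kinesis video stream
}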
I have a string that looks like a comma-separated list of "label:value" items.
package testParsers
import org.scalatest.{Matchers, FlatSpec}
class testReturnStrParser extends FlatSpec with Matchers{
import parsers.ReturnStringParser
"return string parser" should "find the height in ret string" in {
val teststr = "blahblah:123, height:80.3"
val s = ReturnStringParser.findVal("height", teststr)
s should have length 1
s.head shouldEqual ("80.3")
}
it should "work if it is in the middle" in {
val teststr = "blahblah:123, height:80.3,weight:100.0"
val s = ReturnStringParser.findVal("height", teststr)
s should have length 1
s.head shouldEqual ("80.3")
}
}
I am trying to make the parser work when the "height" label is in the middle:
package parsers
object ReturnStringParser {
def findVal(fieldName: String, s: String) = {
val rx = s"(?<=$fieldName:)"+"(.*)*[^,\\s]*"
(rx.r)
.findAllIn(s)
.toList
}
}
This works:
val rx = s"(?<=$fieldName:)" + "([^,]*)"
https://regex101.com/r/aC4vA3/1
Unlike the greedy (.*) in the original pattern, [^,]* stops at the first comma, so the match no longer runs past the value when another label follows it.
I am using the following code for a matrix param in a web service using Jersey.
All variables in the getFromMatrixParam() method get their values from the URI except the color variable, which is supposed to get its value from the matrix param in the URI.
Can someone explain why it is not working?
import javax.ws.rs.GET;
import javax.ws.rs.MatrixParam;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
#Path("/cars")
public class CarResource {
#GET
#Path("/matrix/{car}/{model}/{year}")
#Produces("text/plain")
public String getFromMatrixParam(#PathParam("car") String car, #PathParam("model") String model, #MatrixParam("color") String color, #PathParam("year") String year) {
return "A " + color + " " + car + " " + model + " of " + year + " made";
}
}
The URI used to invoke the above method is:
http://localhost:8080/Webservice_Restful_BBCH05d_Matrix_Parameter/webapi/cars/matrix/honda/city;color=black/2015
The value of color comes out as null.
The output for the above URI is:
"A null honda city of 2015 made"
Can someone explain why the matrix param is not populated with the proper value from the URI?
@MatrixParam currently only picks up matrix parameters from the last segment of the matched path. In your URI, ;color=black is attached to the {model} segment ("city;color=black"), not to the final {year} segment, so the injected value is null.
You can get it from the segment itself by injecting a javax.ws.rs.core.PathSegment:
public String getFromMatrixParam(@PathParam("car") String car,
        @PathParam("model") PathSegment model,
        @PathParam("year") String year) {
    // getMatrixParameters() returns a MultivaluedMap<String, String>
    String color = model.getMatrixParameters().getFirst("color");
    return "A " + color + " " + car + " " + model.getPath() + " of " + year + " made";
}
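Alternatively, @MatrixParam works unchanged when the matrix parameter sits on the final path segment, so a quick sanity check with your original method is to move it there in the request URI:
http://localhost:8080/Webservice_Restful_BBCH05d_Matrix_Parameter/webapi/cars/matrix/honda/city/2015;color=black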