I tried to create an HTTPProvider in Eclipse, but when I run it in Wowza Media Server it does not load properly; it only returns the Wowza server version.
Here is the code from Eclipse:
package com.domain.appname;

import java.io.IOException;
import java.io.OutputStream;

import com.wowza.wms.vhost.IVHost;
import com.wowza.wms.http.HTTProvider2Base;
import com.wowza.wms.http.IHTTPRequest;
import com.wowza.wms.http.IHTTPResponse;

public class CreateApp extends HTTProvider2Base {

    public void onHTTPRequest(IVHost inVhost, IHTTPRequest req, IHTTPResponse resp) {
        // Echo the request's query string back to the caller as XML.
        String ret = req.getQueryString();

        resp.setHeader("Content-Type", "text/xml");

        OutputStream out = resp.getOutputStream();
        byte[] outBytes = ret.getBytes();
        try {
            out.write(outBytes);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
I also configured the VHost.xml file with:
<HTTPProvider>
    <BaseClass>com.domain.appname.CreateApp</BaseClass>
    <RequestFilters>CreateProducerApp*</RequestFilters>
    <AuthenticationMethod>none</AuthenticationMethod>
</HTTPProvider>
Please help
Without seeing your complete VHost.xml, my guess would be that you put your new HTTPProvider last in the list of HTTPProviders.
When the server processes an HTTP request it goes through the providers in order and uses the first one whose RequestFilters pattern matches. The provider that returns the server info has a RequestFilter of "*", which matches everything, so no provider listed after it will ever get called. It is usually the last entry, so make sure yours comes before it.
In other words, put your HTTPProvider entry in VHost.xml before the following block:
<HTTPProvider>
    <BaseClass>com.wowza.wms.http.HTTPServerVersion</BaseClass>
    <RequestFilters>*</RequestFilters>
    <AuthenticationMethod>none</AuthenticationMethod>
</HTTPProvider>
Otherwise all of your requests will return the Wowza version details.
So the end of your VHost.xml should look like this:
<HTTPProvider>
    <BaseClass>com.domain.appname.CreateApp</BaseClass>
    <RequestFilters>CreateProducerApp*</RequestFilters>
    <AuthenticationMethod>none</AuthenticationMethod>
</HTTPProvider>
<HTTPProvider>
    <BaseClass>com.wowza.wms.http.HTTPServerVersion</BaseClass>
    <RequestFilters>*</RequestFilters>
    <AuthenticationMethod>none</AuthenticationMethod>
</HTTPProvider>
Related
I have the following need: I would like to restart my instance every day at 5:00 am. I read that the best way to automate this is to use Cloud Scheduler and a Cloud Function, but I am not familiar with these two GCP features.
I created two jobs in Cloud Scheduler, one to STOP my VM instance at 5:00 am and another to START it at 5:10 am, but I don't know how to proceed in the Cloud Function to finish the process.
Could someone help me with this? Hugs to everyone!
Here is the error from my project's logs when I try to implement it:
ERROR
{ "jobName": "projects/my-project/locations/us-central1/jobs/Stop", "#type": "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished", "status": "INTERNAL", "url": "https://us-central1-my-project.cloudfunctions.net/power/stop?zone=us-central1-a&instance=my-instance", "targetType": "HTTP" }
###############
{
  insertId: "1klx7n3g18eq5zs"
  jsonPayload: {
    jobName: "projects/my-project/locations/us-central1/jobs/Stop"
    @type: "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
    status: "INTERNAL"
    url: "https://us-central1-my-project.cloudfunctions.net/power/stop?zone=us-central1-a&instance=my-instance"
    targetType: "HTTP"
  }
  httpRequest: {
    status: 500
  }
  resource: {
    type: "cloud_scheduler_job"
    labels: {
      location: "us-central1"
      project_id: "my-project"
      job_id: "Stop"
    }
  }
  timestamp: "2020-08-07T08:00:06.896367090Z"
  severity: "ERROR"
  logName: "projects/my-project/logs/cloudscheduler.googleapis.com%2Fexecutions"
  receiveTimestamp: "2020-08-07T08:00:06.896367090Z"
}
I have the same use case in my project, and I use the Python function below to start and stop a VM; it handles both the start and stop paths.
from flask import Flask, request, abort
import os
import logging

app = Flask(__name__)

@app.route('/')
@app.route('/<path:path>')
def power(path=None):
    # these libraries are required to reach the Compute Engine API
    from googleapiclient import discovery
    from oauth2client.client import GoogleCredentials

    # the function will use the service account attached to your function
    credentials = GoogleCredentials.get_application_default()

    # specify the API we are going to use, in this case Compute Engine
    service = discovery.build('compute', 'v1', credentials=credentials, cache_discovery=False)
    logging.getLogger('googleapiclient.discovery_cache').setLevel(logging.ERROR)

    # Project ID for this request.
    project = 'yourprojectID'  # TODO: Update placeholder value.
    zone = request.args.get('zone', None)
    instance = request.args.get('instance', None)

    # call the method to start or stop the instance
    if request.path == "/stop":
        req = service.instances().stop(project=project, zone=zone, instance=instance)
    elif request.path == "/start":
        req = service.instances().start(project=project, zone=zone, instance=instance)
    else:
        abort(418)

    # execute the command
    response = req.execute()
    print(response)
    return "OK"

if __name__ == '__main__':
    app.run(port=3000, debug=True)
requirements.txt file
google-api-python-client
oauth2client
flask
Scheduler config
Create a service account with the functions.invoker permission on your function.
Create a new Cloud Scheduler job:
Specify the frequency in cron format.
Specify HTTP as the target type.
Add the URL of your Cloud Function and the HTTP method as usual.
Select OIDC token from the Auth header dropdown.
Add the service account email in the Service account text box.
In the Audience field, enter only the URL of the function, without any query parameters.
In Cloud Scheduler I hit my function using these URLs:
https://us-central1-yourprojectID.cloudfunctions.net/power/stop?zone=us-central1-a&instance=instance-1
https://us-central1-yourprojectID.cloudfunctions.net/power/start?zone=us-central1-a&instance=instance-1
and I used this audience:
https://us-central1-yourprojectID.cloudfunctions.net/power
Please replace yourprojectID in the code and in the URLs. Here, us-central1 is the region where my function is located, power is the name of my function, us-central1-a is the zone where my instance is located, and instance-1 is the name of my instance.
Update for Java
Cloud Functions supports Java 11 and Maven, but this support is still in beta.
First you need these dependencies; add the following lines to your pom.xml file:
<dependency>
    <groupId>com.google.apis</groupId>
    <artifactId>google-api-services-compute</artifactId>
    <version>beta-rev20200629-1.30.10</version>
</dependency>
<dependency>
    <groupId>com.google.api-client</groupId>
    <artifactId>google-api-client</artifactId>
    <version>1.30.10</version>
</dependency>
Cloud function to START a VM
package com.example;

import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;
import java.io.BufferedWriter;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.compute.Compute;
import com.google.api.services.compute.model.Operation;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Arrays;

public class Example implements HttpFunction {

    @Override
    public void service(HttpRequest request, HttpResponse response) throws Exception {
        // Project ID for this request.
        String project = "my-project"; // TODO: Update placeholder value.

        // The name of the zone for this request.
        String zone = "my-zone"; // TODO: Update placeholder value.

        // Name of the instance resource to start.
        String instance = "my-instance"; // TODO: Update placeholder value.

        Compute computeService = createComputeService();

        // you can change the method start to stop
        Compute.Instances.Start xrequest = computeService.instances().start(project, zone, instance);

        Operation xresponse = xrequest.execute();

        // TODO: Change code below to process the `xresponse` object:
        System.out.println(xresponse);

        BufferedWriter writer = response.getWriter();
        writer.write("Done");
    }

    public static Compute createComputeService() throws IOException, GeneralSecurityException {
        HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
        JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();

        GoogleCredential credential = GoogleCredential.getApplicationDefault();
        if (credential.createScopedRequired()) {
            credential = credential.createScoped(Arrays.asList("https://www.googleapis.com/auth/cloud-platform"));
        }

        return new Compute.Builder(httpTransport, jsonFactory, credential)
                .setApplicationName("Google-ComputeSample/0.1")
                .build();
    }
}
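Since the Python version above routes on the request path, a single Java function can handle both start and stop in the same way. This is a hedged sketch of that idea, not part of the official sample: it reuses the createComputeService() helper from the Example class above, and the project ID and the fallback zone/instance values are placeholders.
package com.example;

import com.google.api.services.compute.Compute;
import com.google.api.services.compute.ComputeRequest;
import com.google.api.services.compute.model.Operation;
import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;

public class Power implements HttpFunction {

    @Override
    public void service(HttpRequest request, HttpResponse response) throws Exception {
        String project = "my-project"; // TODO: placeholder value
        String zone = request.getFirstQueryParameter("zone").orElse("us-central1-a");
        String instance = request.getFirstQueryParameter("instance").orElse("instance-1");

        Compute computeService = Example.createComputeService();

        // Pick start or stop from the request path, like the Python version.
        ComputeRequest<Operation> call;
        if (request.getPath().endsWith("/stop")) {
            call = computeService.instances().stop(project, zone, instance);
        } else if (request.getPath().endsWith("/start")) {
            call = computeService.instances().start(project, zone, instance);
        } else {
            response.setStatusCode(404);
            return;
        }

        Operation operation = call.execute();
        response.getWriter().write(operation.getStatus() + "\n");
    }
}
With this variant you can keep the same two Cloud Scheduler URLs (.../power/start and .../power/stop) that the Python function uses.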
For more information you can check this document; it contains examples in several different programming languages.
I need to install the RediSearch module on top of a GCP Memorystore Redis instance.
I followed these steps:
docker run -p 6379:6379 redislabs/redisearch:latest
I pushed this Docker image to a Kubernetes cluster and exposed the external IP. I used that external IP and port 6379 as the configuration for my application, but I'm not able to connect to RediSearch.
Code:
import java.io.IOException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import io.redisearch.client.Client;
import io.redisearch.*;

public class RediSearch {

    static Client client = new Client("testdoc1", "clusteripaddress", 8097);

    private static final Logger LOG = LoggerFactory.getLogger(RediSearch.class);

    public interface Options extends PipelineOptions {
        @Description("gcp project id.")
        @Default.String("XXXX")
        String getProjectId();

        void setProjectId(String projectId);
    }

    public static PipelineResult run(Options options) throws IOException {
        Pipeline pipeline = Pipeline.create(options);

        pipeline.apply(Create.of("test"))
                .apply(ParDo.of(new DoFn<String, String>() {
                    private static final long serialVersionUID = 1L;

                    @ProcessElement
                    public void processElement(ProcessContext c) throws Exception {
                        String pubsubmsg = c.element();

                        Schema sc = new Schema()
                                .addTextField("title", 5.0)
                                .addTextField("body", 1.0)
                                .addNumericField("price");

                        client.createIndex(sc, Client.IndexOptions.Default());

                        Map<String, Object> fields = new HashMap<String, Object>();
                        fields.put("title", "hello world");
                        fields.put("body", "lorem ipsum");
                        // note: successive puts on the same key overwrite the previous value
                        fields.put("price", 800);
                        fields.put("price", 1337);
                        fields.put("price", 2000);

                        client.addDocument("searchdoc3", fields);

                        SearchResult[] res = client.searchBatch(new Query("hello world").limit(0, 5).setWithScores());
                        for (Document d : res[0].docs) {
                            LOG.info("redisearchlog{}", d.getId().startsWith("search"));
                            LOG.info("redisearchlog1{}", d.getProperties());
                            LOG.info("redisearchlog2{}", d.toString());
                        }
                    }
                }));

        return pipeline.run();
    }

    public static void main(String[] args) throws IOException {
        Options options = PipelineOptionsFactory.fromArgs(args).as(Options.class);
        run(options);
    }
}
Error:
redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at redis.clients.jedis.util.Pool.getResource(Pool.java:59)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:234)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:15)
at io.redisearch.client.Client._conn(Client.java:137)
at io.redisearch.client.Client.getAllConfig(Client.java:275)
at com.testing.redisearch.RediSearch$1.processElement(RediSearch.java:59)
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Failed connecting to host xxxxxxxxxxx:6379
at redis.clients.jedis.Connection.connect(Connection.java:204)
at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:100)
at redis.clients.jedis.BinaryJedis.connect(BinaryJedis.java:1894)
at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:117)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:889)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:424)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:349)
at redis.clients.jedis.util.Pool.getResource(Pool.java:50)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:234)
at redis.clients.jedis.JedisPool.getResource(JedisPool.java:15)
at io.redisearch.client.Client._conn(Client.java:137)
at io.redisearch.client.Client.getAllConfig(Client.java:275)
at com.testing.redisearch.RediSearch$1.processElement(RediSearch.java:59)
at com.testing.redisearch.RediSearch$1$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:218)
at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:183)
at org.apache.beam.runners.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:335)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:44)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:49)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:201)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:411)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:380)
at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:305)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at redis.clients.jedis.Connection.connect(Connection.java:181)
... 31 more
Any solution is appreciated.
There are multiple possible causes behind the error JedisConnectionException: Could not get a resource from the pool. According to the answers to this question, it means the connection to RediSearch couldn't be established, whether because Redis is not running, the connection times out, or a connection cannot be allocated from the pool.
Regardless, I have noticed that even though you deploy Redis on port 6379, your code tries to access it on port 8097. Change your Client declaration to the following and retry the connection:
static Client client = new Client("testdoc1", "<cluster_ip_address>", 6379);
If you are looking to have the RediSearch module in your Memorystore instance, that does not appear to be supported yet. You can see in the Google Cloud docs here that, at the time of writing, Redis modules are not supported even for version 5.0.
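If the connection still fails after fixing the port, a quick way to confirm that the host and port are reachable at all, independently of RediSearch, is a plain Jedis PING. A minimal sketch, where the host is a placeholder for the external IP of your exposed service:
import redis.clients.jedis.Jedis;

public class RedisPing {

    public static void main(String[] args) {
        // Placeholder host: the external IP of the exposed RediSearch service.
        try (Jedis jedis = new Jedis("<cluster_ip_address>", 6379)) {
            System.out.println(jedis.ping()); // "PONG" means the host and port are reachable
        }
    }
}
If this prints PONG, the endpoint is reachable and the problem lies elsewhere; if it times out like your stack trace does, check how the Kubernetes service is exposed.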
I am using Asterisk, and via extensions.conf I have to send voicemail to email using a Python script.
The Python script runs fine, but I have no idea how to use it with the extensions.
The SMTP code is working fine.
The context is below:
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
from email import encoders
fromaddr = "from"
toaddr = "to"
As per the comments on your question, it seems you already have this working with Asterisk's built-in voicemail-to-email, so you should have a valid reason to process it outside Asterisk. If that is the case, you could use the Asterisk System() application to call the script from the dialplan, or use externnotify in voicemail.conf to call the Python script, which will receive (you will need to test this) the following parameters: context, extension, number of new voicemails, old voicemails, and urgent voicemails. Sketches of both hooks are shown below.
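For illustration only, hedged sketches of the two hooks just mentioned; the script path, extension number, and voicemail context are hypothetical placeholders, so adjust them to your setup:
; voicemail.conf - run the script after each new voicemail is left
[general]
externnotify=/usr/local/bin/vm_notify.py

; extensions.conf - or call the script directly from the dialplan with System()
exten => 1001,1,VoiceMail(1001@default)
 same => n,System(/usr/local/bin/vm_notify.py ${EXTEN})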
I have a Java EE server/client architecture in which the two sides communicate over an SSL connection. Once the connection is made, the client can call the server's web services. My question is: how can I access the client certificate information in the server web service? My server controller is below:
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("mycontroller")
@Consumes(MediaType.APPLICATION_XML)
@Produces(MediaType.APPLICATION_XML)
public class Controller {

    @GET
    @Path("dosomething")
    public Response doSomething() {
        // How can I have access to certificate information here?
        return Response.ok().build();
    }
}
I found a way to do what I wanted.
First, the server has to be configured to require client certificate authentication. In my case I use a JBoss server and had to add this to the standalone.xml file:
...
<subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" native="false">
    ...
    <connector name="https" protocol="HTTP/1.1" scheme="https" socket-binding="https" enable-lookups="false" secure="true">
        <ssl name="localhost" key-alias="localhost" password="server" certificate-file="${jboss.server.config.dir}/server.jks" certificate-key-file="${jboss.server.config.dir}/server.jks" ca-certificate-file="${jboss.server.config.dir}/truststore.jks" protocol="TLSv1" verify-client="true" />
    </connector>
    ...
</subsystem>
...
Then, in my controller, I had to inject HttpServletRequest, after which I could obtain an X509Certificate instance containing the certificate information:
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.servlet.http.HttpServletRequest;
import java.security.cert.X509Certificate;

@Path("mycontroller")
@Consumes(MediaType.APPLICATION_XML)
@Produces(MediaType.APPLICATION_XML)
public class Controller {

    @Context
    private HttpServletRequest request;

    @GET
    @Path("dosomething")
    public Response doSomething() {
        // The servlet container exposes the client certificate chain as a request attribute.
        X509Certificate[] certChain = (X509Certificate[]) request.getAttribute("javax.servlet.request.X509Certificate");
        X509Certificate certificate = certChain[0];
        return Response.ok().build();
    }
}
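If you then need to read specific fields from the certificate, the standard java.security.cert.X509Certificate accessors are enough. A small hypothetical helper you could call from doSomething() (the class name is mine, not part of the original code):
import java.security.cert.X509Certificate;

final class CertInfo {

    // Summarizes a few commonly used fields of the client certificate.
    static String summarize(X509Certificate certificate) {
        return "subject=" + certificate.getSubjectX500Principal().getName()
                + ", issuer=" + certificate.getIssuerX500Principal().getName()
                + ", serial=" + certificate.getSerialNumber()
                + ", notAfter=" + certificate.getNotAfter();
    }
}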
If you are looking for the standard certificate information that would be found in the HTTP headers and the HTTP servlet request object, such as client certificate information forwarded by an Apache HTTP reverse proxy, you can inject these.
For example:
@Context private HttpServletRequest servletRequest;
@Context private ServletContext servletContext;
(see Get HttpServletRequest in Jax Rs / Appfuse application? or the Java EE tutorial)
If you wish to access the keystore file and load the private key of the certificate, then file access should be done through a JNDI file resource or a JCA adaptor.
But I would advise caution: the application server should handle all of the SSL/TLS connection security; your WAR component just declares that it wants the connection to be "confidential" in its web.xml file. Mixing message-level security and authentication with the application or transport protocol security can break separation of concerns, e.g. keeping authentication attached to the message in a bus or hub scenario.
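For reference, the "confidential" declaration mentioned above is a standard web.xml security constraint. A hedged sketch, with the resource name and URL pattern as placeholders:
<!-- web.xml: require TLS for everything under the placeholder pattern -->
<security-constraint>
    <web-resource-collection>
        <web-resource-name>secured</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>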
We are developing a basic game for Android phones and have recently switched from the Eclipse IDE to Android Studio. With the switch, I was forced to move from aws-java-sdk-1.9.30 to aws-android-sdk-2.2.0.
I have attempted to update the AWS code and it now compiles; however, I have come across an issue while creating the AmazonDynamoDBClient.
I am getting this runtime error:
Exception in thread "main" java.lang.IllegalArgumentException: no HostnameVerifier specified
I'm not sure if I am missing a step somewhere. If anyone can help shed some light on what may be causing the issue, I will be very thankful!
On a related note, most of the examples I have been able to find, and the examples on which I based my initial code, seem to be for the aws-java-sdk-1.9.30 jars. If anyone knows where I can find examples suited to the aws-android-sdk-2.2.0 jars, it would help immensely!
Here is the entire stack trace as requested:
CLIENT:com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient@5ef04b5
Creating Match Details...
Exception in thread "main" java.lang.IllegalArgumentException: no HostnameVerifier specified
at javax.net.ssl.HttpsURLConnection.setHostnameVerifier(HttpsURLConnection.java:265)
at com.amazonaws.http.UrlHttpClient.configureConnection(UrlHttpClient.java:169)
at com.amazonaws.http.UrlHttpClient.createConnection(UrlHttpClient.java:105)
at com.amazonaws.http.UrlHttpClient.execute(UrlHttpClient.java:60)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:361)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:211)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2930)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.query(AmazonDynamoDBClient.java:1240)
at com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper.query(DynamoDBMapper.java:2181)
at com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper.query(DynamoDBMapper.java:2137)
at com.towerfield.aws.MatchDetails.getMatchIds(MatchDetails.java:201)
at com.towerfield.aws.MatchDetails.<init>(MatchDetails.java:109)
at com.towerfield.aws.MatchDetails.main(MatchDetails.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Process finished with exit code 1
Here is where the exception is thrown (inside HttpsURLConnection.java):
public void setHostnameVerifier(HostnameVerifier v)
{
    if (v == null)
    {
        throw new IllegalArgumentException("HostnameVerifier is null");
    }
    hostnameVerifier = v;
}
Here is the relevant code which seems to be causing the runtime error:
static AmazonDynamoDBClient client;
...
BasicAWSCredentials credentials = new BasicAWSCredentials("KEY","SECRETKEY");
client = new AmazonDynamoDBClient(credentials);
...
DynamoDBMapper mapper = new DynamoDBMapper(client);
...
List<PlayersListOfActiveMatches> latestReplies = mapper.query(PlayersListOfActiveMatches.class, queryExpression);
Here is a list of my imports as was requested:
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBAttribute;
import com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBHashKey;
import com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper;
import com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBQueryExpression;
import com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBRangeKey;
import com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBTable;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.Condition;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
DynamoDB examples for the AWS SDK for Android are available in the AWS documentation.
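Those examples generally initialize the client with a Cognito credentials provider and an explicit region rather than hard-coded keys. Below is a hedged sketch of that pattern; the identity pool ID and region are placeholders, and it assumes the code runs on an Android device or emulator rather than the desktop JVM your stack trace shows:
import android.content.Context;

import com.amazonaws.auth.CognitoCachingCredentialsProvider;
import com.amazonaws.mobileconnectors.dynamodbv2.dynamodbmapper.DynamoDBMapper;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;

public class DynamoClientFactory {

    // Hypothetical helper; the identity pool ID and region below are placeholders.
    public static DynamoDBMapper createMapper(Context context) {
        CognitoCachingCredentialsProvider credentials = new CognitoCachingCredentialsProvider(
                context.getApplicationContext(),
                "us-east-1:00000000-0000-0000-0000-000000000000", // placeholder identity pool ID
                Regions.US_EAST_1);                                // placeholder region

        AmazonDynamoDBClient client = new AmazonDynamoDBClient(credentials);
        client.setRegion(Region.getRegion(Regions.US_EAST_1));
        return new DynamoDBMapper(client);
    }
}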