I have an Amazon EC2 Linux instance set up and running my Java web application, which serves REST requests. The problem is that I am trying to use Google Cloud Vision in this application to detect violence/nudity in users' pictures.
Accessing the EC2 instance in my terminal, I set GOOGLE_APPLICATION_CREDENTIALS with the following command, which I found in the documentation:
export GOOGLE_APPLICATION_CREDENTIALS=<my_json_path.json>
Here comes my first problem: when I restart my server and run 'echo $GOOGLE_APPLICATION_CREDENTIALS', the variable is gone. OK, I set it in .bash_profile and .bashrc, and now it persists.
But when I run my application, which executes the code below to get the adult and violence ratings of my picture, I get the following error:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
My code is the following:
Controller:
if (SafeSearchDetection.isSafe(user.getId())) {
    if (!UserDB.updateUserProfile(user)) {
        throw new SQLException("Failed to Update");
    }
} else {
    throw new IOException("Explicit Content");
}
SafeSearchDetection.isSafe(int idUser):
String path = IMAGES_PATH + idUser + ".jpg";
try {
    mAdultMedicalViolence = detectSafeSearch(path);
    // Likelihood values: 3 = POSSIBLE, 4 = LIKELY, 5 = VERY_LIKELY,
    // so anything above POSSIBLE is rejected
    if (mAdultMedicalViolence.get(0) > 3)        // adult
        return false;
    else if (mAdultMedicalViolence.get(1) > 3)   // medical
        return false;
    else if (mAdultMedicalViolence.get(2) > 3)   // violence
        return false;
} catch (IOException e) {
    e.printStackTrace();
} catch (Exception e) {
    e.printStackTrace();
}
return true;
detectSafeSearch(String path):
List<AnnotateImageRequest> requests = new ArrayList<AnnotateImageRequest>();
ArrayList<Integer> adultMedicalViolence = new ArrayList<Integer>();
ByteString imgBytes = ByteString.readFrom(new FileInputStream(path));

Image img = Image.newBuilder().setContent(imgBytes).build();
Feature feat = Feature.newBuilder().setType(Type.SAFE_SEARCH_DETECTION).build();
AnnotateImageRequest request = AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
requests.add(request);

// This is where the Application Default Credentials are resolved
ImageAnnotatorClient client = ImageAnnotatorClient.create();
BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
List<AnnotateImageResponse> responses = response.getResponsesList();

for (AnnotateImageResponse res : responses) {
    if (res.hasError()) {
        System.out.println("Error: " + res.getError().getMessage() + "\n");
        return null;
    }
    SafeSearchAnnotation annotation = res.getSafeSearchAnnotation();
    adultMedicalViolence.add(annotation.getAdultValue());
    adultMedicalViolence.add(annotation.getMedicalValue());
    adultMedicalViolence.add(annotation.getViolenceValue());
}

for (int content : adultMedicalViolence)
    System.out.println(content + "\n");

return adultMedicalViolence;
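One side note on detectSafeSearch, unrelated to the credentials error: ImageAnnotatorClient holds background gRPC resources and implements AutoCloseable, so it is worth closing it when you are done, otherwise threads can accumulate inside Tomcat. A sketch of the same call with try-with-resources:

try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
    BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
    // ... iterate over response.getResponsesList() exactly as above ...
}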
My REST application runs on Tomcat 8. After getting no result from the call:
System.getenv("GOOGLE_APPLICATION_CREDENTIALS")
I realized that my problem was with the environment variables visible to the Tomcat installation. To correct this, I just created a new file, setenv.sh, in Tomcat's /bin directory with the content:
GOOGLE_APPLICATION_CREDENTIALS=<my_json_path.json>
And it worked!
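For reference, the environment variable is not the only option: the Vision client can also be handed the service-account credentials explicitly in code, which sidesteps Tomcat's environment entirely. A minimal sketch (the JSON path below is a placeholder) that would replace the ImageAnnotatorClient.create() call in detectSafeSearch:

import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.ImageAnnotatorSettings;
import java.io.FileInputStream;

// Build the client from an explicit service-account key file instead of
// relying on GOOGLE_APPLICATION_CREDENTIALS being set for the Tomcat process.
GoogleCredentials credentials =
        GoogleCredentials.fromStream(new FileInputStream("/path/to/my_json_path.json"));
ImageAnnotatorSettings settings = ImageAnnotatorSettings.newBuilder()
        .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
        .build();
ImageAnnotatorClient client = ImageAnnotatorClient.create(settings);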
I have a Google App Engine project that I have just finished upgrading from Java 8 to Java 11.
I also upgraded TaskQueue to Cloud Tasks and everything works, but how can I run Cloud Tasks on localhost?
This is my old code:
Queue queue = QueueFactory.getDefaultQueue();
TaskOptions taskOptions = TaskOptions.Builder.withUrl("/" + backgroundTaskParams.getUrl());
taskOptions.retryOptions(RetryOptions.Builder.withTaskRetryLimit(backgroundTaskParams.getRetryLimit()));
taskOptions.countdownMillis(backgroundTaskParams.getStartDelay());

Map<String, String> params = backgroundTaskParams.getParams();
if (params != null) {
    for (String key : params.keySet()) {
        taskOptions.param(key, params.get(key));
    }
}
TaskHandle taskHandle = queue.add(taskOptions);
This does not work on the GCP Java 11 standard environment, so I upgraded to this code:
try (CloudTasksClient client = CloudTasksClient.create()) {
    RetryConfig retyConfig = RetryConfig.builder()
            .setMaxRetries(backgroundTaskParams.getRetryLimit())
            .build();
    String queueName = QueueName.of(EMF.getProjectId(), "us-central1", "default").toString();
    String payload = Utilities.buildUrlParams(backgroundTaskParams.getParams());
    long startTime = (ApplicationServicesManager.getInstance().getTimeManager().getCurrentTime() + backgroundTaskParams.getStartDelay()) / 1000;

    Task.Builder taskBuilder = Task.newBuilder();
    taskBuilder.setAppEngineHttpRequest(
            AppEngineHttpRequest.newBuilder()
                    .setRelativeUri("/" + backgroundTaskParams.getUrl())
                    .setHttpMethod(HttpMethod.POST)
                    .setBody(ByteString.copyFrom(payload, Charset.defaultCharset()))
                    .build());
    taskBuilder.setScheduleTime(
            Timestamp.newBuilder()
                    .setSeconds(startTime));
    taskBuilder.setDispatchCount(1);
    Task task = taskBuilder.build();
    Task taskResponse = client.createTask(queueName, task);
}
//
catch (Exception e) {
    NotificationsManager.sendLogException(e);
}
The new code works fine on the production server, but when I run it on localhost, the task is still created on the production server, so I cannot debug it, which is not good. :)
So how can I start the task on localhost?
Thank you
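For reference: as far as I know there is no official local emulator for Cloud Tasks (unlike the old task queue in the dev appserver), so one common workaround is to bypass Cloud Tasks when running locally and POST the payload straight to the task handler so it can be debugged. This is only a rough sketch, not from the original post; the helper name, the localhost port, and the environment check are all assumptions:

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical helper: POST the task payload straight to the local handler
// instead of letting Cloud Tasks dispatch it against production.
private void dispatchLocally(String relativeUrl, String payload) throws IOException {
    URL url = new URL("http://localhost:8080/" + relativeUrl);   // local dev port is an assumption
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    try (OutputStream os = conn.getOutputStream()) {
        os.write(payload.getBytes(StandardCharsets.UTF_8));
    }
    conn.getResponseCode();   // force the request to be sent
    conn.disconnect();
}

You would then branch between dispatchLocally(...) and client.createTask(...) depending on where the code is running, for example by checking an environment variable such as GAE_ENV, which (as far as I know) is set to "standard" in the production Java 11 runtime and absent locally.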
This is for embedded Jetty. I am trying to clean the Jetty working directory that is automatically created in the /tmp folder inside the container. I wrote the method cleanJettyWorkingDirectory() below to clean the working directory, and it works. The problem is that the working directory now fails to be created, I think because I am calling this method from the wrong place. Whenever I restart the Docker container, it cleans out the entire working directory. Please assist.
public void cleanJettyWorkingDirectory() {
    final File folder = new File(JETTY_WORKING_DIRECTORY);
    final File[] files = folder.listFiles(new FilenameFilter() {
        @Override
        public boolean accept(final File dir, final String name) {
            return name.matches("jetty-0_0_0_0-.*");
        }
    });
    for (final File file : files) {
        try {
            FileUtils.deleteDirectory(file);
        } catch (IOException e) {
            logger.info("Unable to delete the Jetty working directory");
        }
    }
}
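A side note on this method: File.listFiles() returns null when the directory does not exist or cannot be read, so on a fresh container the for loop would throw a NullPointerException. A null-safe variant (a sketch keeping the question's names) could be:

public void cleanJettyWorkingDirectory() {
    final File folder = new File(JETTY_WORKING_DIRECTORY);
    // listFiles() returns null if the directory is missing or unreadable
    final File[] files = folder.listFiles((dir, name) -> name.matches("jetty-0_0_0_0-.*"));
    if (files == null) {
        return;
    }
    for (final File file : files) {
        try {
            FileUtils.deleteDirectory(file);
        } catch (IOException e) {
            logger.info("Unable to delete the Jetty working directory");
        }
    }
}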
The Jetty server main class file is below:
public class JettyServer {

    private final static Logger logger = Logger.getLogger(JettyServer.class.getName());

    private static final int JETTY_PORT = 10000;
    private static final String JETTY_REALM_PROPERTIES_FILE_NAME = "realm.properties";
    private static final String JETTY_REALM_NAME = "myrealm";
    private static final String JETTY_WORKING_DIRECTORY = "tmp";

    public static QueuedThreadPool threadPool;

    public JettyServer() {
        try {
            cleanJettyWorkingDirectory(); // *Calling here*

            RolloverFileOutputStream os = new RolloverFileOutputStream(JETTY_STDOUT_LOG_FILE_NAME, true);
            PrintStream logStream = new PrintStream(os);
            System.setOut(logStream);
            System.setErr(logStream);

            Server server = new Server(JETTY_PORT);
            server.addBean(getLoginService());

            try {
                logger.info("Configuring Jetty SSL..");
                HttpConfiguration http_config = new HttpConfiguration();
                http_config.setSecureScheme("https");
                http_config.setSecurePort(JETTY_PORT);
                https.setPort(JETTY_PORT);
                server.setConnectors(new Connector[]{https});
                logger.info("Jetty SSL successfully configured..");
            } catch (Exception e) {
                logger.severe("Error configuring Jetty SSL.." + e);
                throw e;
            }

            Configuration.ClassList classlist = Configuration.ClassList.setServerDefault(server);
            classlist.addAfter("org.eclipse.jetty.webapp.FragmentConfiguration",
                    "org.eclipse.jetty.plus.webapp.EnvConfiguration",
                    "org.eclipse.jetty.plus.webapp.PlusConfiguration");

            // register ui and service web apps
            HandlerCollection webAppHandlers = getWebAppHandlers();
            for (Connector c : server.getConnectors()) {
                c.getConnectionFactory(HttpConnectionFactory.class).getHttpConfiguration().setRequestHeaderSize(MAX_REQUEST_HEADER_SIZE);
                c.getConnectionFactory(HttpConnectionFactory.class).getHttpConfiguration().setSendServerVersion(false);
            }

            threadPool = (QueuedThreadPool) server.getThreadPool();

            // request logs
            RequestLogHandler requestLogHandler = new RequestLogHandler();
            AsyncRequestLogWriter asyncRequestLogWriter = new AsyncRequestLogWriter(JETTY_REQUEST_LOG_FILE_NAME);
            asyncRequestLogWriter.setFilenameDateFormat(JETTY_REQUEST_LOG_FILE_NAME_DATE_FORMAT);
            asyncRequestLogWriter.setAppend(JETTY_REQUEST_LOG_FILE_APPEND);
            asyncRequestLogWriter.setRetainDays(JETTY_REQUEST_LOG_FILE_RETAIN_DAYS);
            asyncRequestLogWriter.setTimeZone(TimeZone.getDefault().getID());
            requestLogHandler.setRequestLog(new AppShellCustomRequestLog(asyncRequestLogWriter));
            webAppHandlers.addHandler(requestLogHandler);

            StatisticsHandler statisticsHandler = new StatisticsHandler();
            statisticsHandler.setHandler(new AppshellStatisticsHandler());
            webAppHandlers.addHandler(statisticsHandler);

            // set handler
            server.setHandler(webAppHandlers);

            // start jettyMetricsPsr
            JettyMetricStatistics.logJettyMetrics();

            // set error handler
            server.addBean(new CustomErrorHandler());

            // GZip Handler
            GzipHandler gzip = new GzipHandler();
            server.setHandler(gzip);
            gzip.setHandler(webAppHandlers);

            // setting server attribute for datasources
            server.setAttribute("fawappshellDS", new Resource(JNDI_NAME_FAWAPPSHELL, DatasourceUtil.getFawAppshellDatasource()));
            server.setAttribute("fawcommonDS", new Resource(JNDI_NAME_FAWCOMMON, DatasourceUtil.getCommonDatasource()));
            //new Resource(server, JNDI_NAME_FAWAPPSHELL, getFawAppshellDatasource());
            //new Resource(server, JNDI_NAME_FAWCOMMON, getFawCommonDatasource());

            server.start();
            server.join();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private HandlerCollection getWebAppHandlers() throws SQLException, NamingException {
        // Setting the war and context path for the service layer: oaxservice
        WebAppContext serviceWebapp = new WebAppContext();
        serviceWebapp.setWar(APPSHELL_API_WAR_FILE_PATH);
        serviceWebapp.setContextPath(APPSHELL_API_CONTEXT_PATH);
        serviceWebapp.setPersistTempDirectory(false);

        // setting the war and context path for the UI layer: oaxui
        WebAppContext uiWebapp = new WebAppContext();
        uiWebapp.setWar(APPSHELL_UI_WAR_FILE_PATH);
        uiWebapp.setContextPath(APPSHELL_UI_CONTEXT_PATH);
        uiWebapp.setAllowNullPathInfo(true);
        uiWebapp.setInitParameter("org.eclipse.jetty.servlet.Default.dirAllowed", "false");

        // set error page handler for the UI context
        uiWebapp.setErrorHandler(new CustomErrorHandler());

        // handling the multiple war files using HandlerCollection
        HandlerCollection handlerCollection = new HandlerCollection();
        handlerCollection.setHandlers(new Handler[]{serviceWebapp, uiWebapp});
        return handlerCollection;
    }

    public LoginService getLoginService() throws IOException {
        URL realmProps = JettyServer.class.getClassLoader().getResource(JETTY_REALM_PROPERTIES_FILE_NAME);
        if (realmProps == null)
            throw new FileNotFoundException("Unable to find " + JETTY_REALM_PROPERTIES_FILE_NAME);
        return new HashLoginService(JETTY_REALM_NAME, realmProps.toExternalForm());
    }

    public void cleanJettyWorkingDirectory() {
        final File folder = new File(JETTY_WORKING_DIRECTORY);
        final File[] files = folder.listFiles(new FilenameFilter() {
            @Override
            public boolean accept(final File dir, final String name) {
                return name.matches("jetty-0_0_0_0-.*");
            }
        });
        for (final File file : files) {
            try {
                FileUtils.deleteDirectory(file);
            } catch (IOException e) {
                logger.info("Unable to delete the Jetty working directory");
            }
        }
    }

    public static void main(String[] args) {
        new JettyServer();
    }
}
Option 1: Use docker tmpfs
If you want to eliminate the system temp persistence, configure Docker so that /tmp does not persist between restarts instead of writing this custom cleanup logic in your Java app.
The docker tmpfs is probably going to be a better solution.
See past answer: https://stackoverflow.com/a/52662602/775715
Option 2: Use linux systemd tmpfiles
You could also use systemd-tmpfiles or systemd-tmpfiles-clean to perform the cleanup automatically (and periodically) within the Linux environment of your Docker image.
Option 3: Use a non-standard system temp directory for Jetty
Configure a new Temp Directory for your Java instance ...
$ java -Djava.io.tmpdir=/var/run/jetty/work/ -jar start.jar
Then use your shell script that starts your Jetty instance to just clear out that unique directory before you execute the java instance.
aka:
JETTY_WORK=/var/run/jetty/work
rm -rf $JETTY_WORK/*
java -Djava.io.tmpdir=$JETTY_WORK/ -jar start.jar
This approach also catches all Java temp directory usages from your 3rd party libraries as well, not just Jetty itself.
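Since the question uses embedded Jetty rather than start.jar, the same idea can also be applied in code by giving each WebAppContext its own work directory instead of the shared /tmp. A sketch reusing the WebAppContext fields from the question (the /var/run/jetty/work path is just an example):

// Point each webapp at a dedicated, disposable work directory instead of /tmp.
File jettyWork = new File("/var/run/jetty/work");
serviceWebapp.setTempDirectory(new File(jettyWork, "oaxservice"));
serviceWebapp.setPersistTempDirectory(false);   // Jetty removes it again on clean shutdown
uiWebapp.setTempDirectory(new File(jettyWork, "oaxui"));
uiWebapp.setPersistTempDirectory(false);

You can then wipe only that directory (from the startup script or from tmpfs) without touching anything else under /tmp.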
I cannot seem to update pre-existing UserAttributes on a user "73". I am not sure if this behaviour is to be expected.
Map<String, List<String>> userAttributes = new HashMap<>();
userAttributes.put("Inference", Arrays.asList("NEGATIVE"));
userAttributes.put("Gender", Arrays.asList("M"));
userAttributes.put("ChannelPreference", Arrays.asList("EMAIL"));
userAttributes.put("TwitterHandle", Arrays.asList("Nutter"));
userAttributes.put("Age", Arrays.asList("435"));

EndpointUser endpointUser = new EndpointUser().withUserId("73");
endpointUser.setUserAttributes(userAttributes);

EndpointRequest endpointRequest = new EndpointRequest().withUser(endpointUser);
UpdateEndpointResult updateEndpointResult = pinpoint.updateEndpoint(new UpdateEndpointRequest()
        .withEndpointRequest(endpointRequest)
        .withApplicationId("380c3902d4ds47bfb6f9c6749c6dc8bf")
        .withEndpointId("a1fiy2gy+eghmsadj1vqew6+aa"));
System.out.println(updateEndpointResult.getMessageBody());
@David.Webster,
You can update the user attributes of an Amazon Pinpoint endpoint using the Java code snippet below, which I have tested and confirmed working:
public static void main(String[] args) throws IOException {
    // Try to update the endpoint.
    try {
        System.out.println("===============================================");
        System.out.println("Getting Started with Amazon Pinpoint"
                + "using the AWS SDK for Java...");
        System.out.println("===============================================\n");

        // Initializes the Amazon Pinpoint client.
        AmazonPinpoint pinpointClient = AmazonPinpointClientBuilder.standard()
                .withRegion(Regions.US_EAST_1).build();

        // Creates a new user definition.
        EndpointUser jackchan = new EndpointUser().withUserId("73");

        // Assigns custom user attributes.
        jackchan.addUserAttributesEntry("name", Arrays.asList("Jack", "Chan"));
        jackchan.addUserAttributesEntry("Inference", Arrays.asList("NEGATIVE"));
        jackchan.addUserAttributesEntry("ChannelPreference", Arrays.asList("EMAIL"));
        jackchan.addUserAttributesEntry("TwitterHandle", Arrays.asList("Nutter"));
        jackchan.addUserAttributesEntry("gender", Collections.singletonList("M"));
        jackchan.addUserAttributesEntry("age", Collections.singletonList("435"));

        // Adds the user definition to the EndpointRequest that is passed to the Amazon Pinpoint client.
        EndpointRequest jackchanIphone = new EndpointRequest()
                .withUser(jackchan);

        // Updates the specified endpoint with Amazon Pinpoint.
        UpdateEndpointResult result = pinpointClient.updateEndpoint(new UpdateEndpointRequest()
                .withEndpointRequest(jackchanIphone)
                .withApplicationId("4fd13a407f274f10b4ec06cbc71738bd")
                .withEndpointId("095A8688-7D79-43CE-BDCE-7DF713332BC3"));

        System.out.format("Update endpoint result: %s\n", result.getMessageBody().getMessage());
    } catch (Exception ex) {
        System.out.println("EndpointUpdate Failed");
        System.err.println("Error message: " + ex.getMessage());
        ex.printStackTrace();
    }
}
}
Hope this helps
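If the attributes still do not appear to change, it can help to read the endpoint back after the update and print its user attributes. A small sketch against the same client (the application and endpoint IDs reuse the example values above):

GetEndpointResult endpoint = pinpointClient.getEndpoint(new GetEndpointRequest()
        .withApplicationId("4fd13a407f274f10b4ec06cbc71738bd")
        .withEndpointId("095A8688-7D79-43CE-BDCE-7DF713332BC3"));
// Dump the user attributes the service actually stored for this endpoint.
System.out.println(endpoint.getEndpointResponse().getUser().getUserAttributes());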
I am trying to publish a message on a topic with aws-java-sdk-iot, using AWSIotDataClient's publish method, on a Tomcat server (7.0.70) running on an EC2 instance in the us-east-1 region.
I tried 2 different versions of aws-java-sdk-iot (1.11.255 and 1.10.77).
Ref:
https://github.com/aws/aws-sdk-java/tree/master/aws-java-sdk-iot
When the publish method is called, I see CPU utilization go to 80%, and I see OutOfMemory errors in my application logs and catalina.out.
My tomcat memory settings are -Xms2048m -Xmx2048m
It would be great if anyone could point me in the right direction, or tell me if I am missing something when constructing the AWSIotDataClient or doing something else wrong.
Thanks in Advance.
Below is the code snippet when using version 1.11.255 of aws-java-sdk-iot.
The code is in a singleton class; sendMessageToTopic is the method that gets invoked to send the message on the topic.
public Boolean sendMessageToTopic(String messagePayLoad, String topic) throws IoTPublishException {
    Boolean result = false;
    logger.info("sendMessageToTopic MessagePayload: " + messagePayLoad + ", topic: " + topic);
    try {
        PublishRequest request = new PublishRequest();
        ByteBuffer byteBufferMsg = stringToByteBuffer(messagePayLoad);
        request.setPayload(byteBufferMsg);
        request.setQos(1);
        request.setSdkClientExecutionTimeout(10000);
        request.setSdkRequestTimeout(10000);
        request.setTopic(topic);
        if (awsIotDataClient == null) {
            awsIotDataClient = config();
        }
        awsIotDataClient.publish(request);
        logger.info("Publishing the request on the topic" + request);
        result = true;
    } catch (Exception e) {
        logger.error("Error sending message to the aws-iot-gateway" + messagePayLoad, e);
        throw new IoTPublishException(
                "UNABLE_TO_POST_MESSAGE_IN_IOT_TOPIC",
                "Unable to publish to topic: " + topic + ", for message: "
                        + messagePayLoad, e);
    }
    logger.info("Exit sendMessageToTopic: result: " + result + ", messagePayLoad: " + messagePayLoad + ", topic:" + topic);
    return result;
}
public AWSCredentialsProvider populateCreds() throws AmazonClientException {
    AWSCredentials creds = null;
    try {
        if (instanceProvider == null) {
            try {
                instanceProvider = new InstanceProfileCredentialsProvider(true);
                creds = instanceProvider.getCredentials();
                logger.debug("Creds : {} ", creds);
            } catch (AmazonClientException e) {
                logger.error("Error-AmazonClientException when loading AWS Creds from Instance Meta-Data ", e);
            } catch (Exception e) {
                logger.error("Error-GenericException when loading AWS Creds from Instance Meta-Data ", e);
            }
            if (instanceProvider == null || creds == null) {
                profileProvider = new ProfileCredentialsProvider();
                logger.info("Loaded creds from aws.credentials file in default directory");
            }
        }
    } catch (Exception e) {
        logger.error("Error-GenericException in populateCreds ", e);
        throw new AmazonClientException(
                "Cannot load the credentials from the credential profiles file. " +
                "Please make sure that your credentials file is at the correct " +
                "location (~/.aws/credentials), and is in valid format.",
                e);
    }
    return ((instanceProvider != null && creds != null) ? instanceProvider : profileProvider);
}

protected AWSIotDataClient config() throws Exception {
    try {
        // Loading Via populateCreds: InstanceCredentialsProvider or ProfileCredentialsProvider
        awsIotDataClient = (AWSIotDataClient) AWSIotDataClientBuilder.standard()
                .withCredentials(populateCreds())
                .withRegion(Regions.US_EAST_1)
                .build();
    } catch (Exception e) {
        logger.error("Error-GenericException creating the AWSIotDataClient in config: ", e);
        throw e;
    }
    return awsIotDataClient;
}
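The stringToByteBuffer helper is not shown in the question; for completeness, a typical implementation (my assumption, not the original code) would simply wrap the UTF-8 bytes of the payload:

private ByteBuffer stringToByteBuffer(String messagePayLoad) {
    // Assumed implementation: the SDK reads the publish payload from this buffer as-is.
    return ByteBuffer.wrap(messagePayLoad.getBytes(StandardCharsets.UTF_8));
}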
Update (02/12/2018): it turned out to be a Tomcat MaxPermSize issue, not a problem with the AWS SDK.
Environment: C#, .Net 3.5, Sql Server 2005
I have a method that works in a stand-alone C# console application project. It creates an XMLElement from data in the database and uses a private method to send it to a web service on our local network. When run from VS in this test project, it runs in < 5 seconds.
I copied the class into a CLR project, built it, and installed it in SQL Server (WITH PERMISSION_SET = EXTERNAL_ACCESS). The only difference is the SqlContext.Pipe.Send() calls that I added for debugging.
I am testing it by using an EXECUTE command on one stored procedure (in the CLR) from an SSMS query window. It never returns. When I stop execution of the call after a minute, the last thing displayed is "Calling GetResponse() using http://servername:53694/odata.svc/Customers/". Any ideas as to why the GetResponse() call doesn't return when executing within SQL Server?
private static string SendPost(XElement entry, SqlString url, SqlString entityName)
{
    // Send the HTTP request
    string serviceURL = url.ToString() + entityName.ToString() + "/";
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(serviceURL);
    request.Method = "POST";
    request.Accept = "application/atom+xml,application/xml";
    request.ContentType = "application/atom+xml";
    request.Timeout = 20000;
    request.Proxy = null;

    using (var writer = XmlWriter.Create(request.GetRequestStream()))
    {
        entry.WriteTo(writer);
    }

    try
    {
        SqlContext.Pipe.Send("Calling GetResponse() using " + request.RequestUri);
        WebResponse response = request.GetResponse();
        SqlContext.Pipe.Send("Back from GetResponse()");
        /*
        string feedData = string.Empty;
        Stream stream = response.GetResponseStream();
        using (StreamReader streamReader = new StreamReader(stream))
        {
            feedData = streamReader.ReadToEnd();
        }
        */
        HttpStatusCode StatusCode = ((HttpWebResponse)response).StatusCode;
        response.Close();
        if (StatusCode == HttpStatusCode.Created /* 201 */ )
        {
            return "Created @ Location= " + response.Headers["Location"];
        }
        return "Creation failed; StatusCode=" + StatusCode.ToString();
    }
    catch (WebException ex)
    {
        return ex.Message.ToString();
    }
    finally
    {
        if (request != null)
            request.Abort();
    }
}
The problem turned out to be the creation of the request content from the XML. The original:
using (var writer = XmlWriter.Create(request.GetRequestStream()))
{
    entry.WriteTo(writer);
}
The working replacement:
using (Stream requestStream = request.GetRequestStream())
{
    using (var writer = XmlWriter.Create(requestStream))
    {
        entry.WriteTo(writer);
    }
}
You need to dispose the WebResponse. Otherwise, after a few calls it goes to timeout.
You are asking for trouble doing this in the CLR. And you say you are calling this from a trigger? This belongs in the application tier.
Stuff like this is why when the CLR functionality came out, DBAs were very concerned about how it would be misused.