Lettuce API getting into periodic TimeoutException Issues - amazon-web-services

We have a 10-node (5 masters, 5 read replicas) AWS Redis cluster and we use the Lettuce API with a non-pooled configuration and async calls. Almost once every week we run into an issue where we get continuous TimeoutExceptions for a few minutes. We suspected a network issue, but the networking team has found no problem with the network. What could be a possible solution?
private LettuceConfigWithoutPool(RedisClusterConfig pool) {
    if (lettuceConfigWithoutPoolInstance != null) {
        throw new RuntimeException("Use getInstance() method to get the single instance of this class.");
    }
    List<RedisURI> redisURIS = new RedisURIBuilder.Builder(pool)
            .withPassword()
            .withTLS()
            .build();
    ClusterTopologyRefreshOptions clusterTopologyRefreshOptions = new ClusterTopologyBuilder.Builder(pool)
            .setAdaptiveRefreshTriggersTimeoutInMinutes()
            .build();
    ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder()
            .topologyRefreshOptions(clusterTopologyRefreshOptions)
            .build();
    RedisClusterClient redisClusterClient = ClusterClientProvider.buildClusterClient(redisURIS, clusterClientOptions);
    StatefulRedisClusterConnection<String, Object> statefulRedisClusterConnection = redisClusterClient.connect(new SerializedObjectCodec());
    statefulRedisClusterConnection.setReadFrom(ReadFromArgumentProvider.getReadFromArgument(pool.getReadFrom()));
    this.command = statefulRedisClusterConnection.async();
}
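For reference, the client-side settings that usually matter for intermittent cluster timeouts are the command timeout, TCP keep-alive, and topology refresh. Below is a minimal sketch using plain Lettuce 5.x APIs (the builder classes above are project-specific wrappers, so the class and the values here are illustrative assumptions, not a confirmed fix):
import io.lettuce.core.RedisURI;
import io.lettuce.core.SocketOptions;
import io.lettuce.core.TimeoutOptions;
import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import io.lettuce.core.cluster.RedisClusterClient;

import java.time.Duration;
import java.util.Collections;

public class LettuceClusterClientSketch {

    public static RedisClusterClient buildClient(String host, int port) {
        RedisURI uri = RedisURI.builder().withHost(host).withPort(port).build();

        // Refresh the topology periodically and on MOVED/ASK/connection failures,
        // so commands are not routed to stale nodes after a failover.
        ClusterTopologyRefreshOptions refreshOptions = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofMinutes(1))
                .enableAllAdaptiveRefreshTriggers()
                .build();

        ClusterClientOptions options = ClusterClientOptions.builder()
                .topologyRefreshOptions(refreshOptions)
                // Fail commands after a bounded time instead of queueing behind a dead connection.
                .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(5)))
                // Keep-alive and an explicit connect timeout surface broken TCP connections sooner.
                .socketOptions(SocketOptions.builder()
                        .connectTimeout(Duration.ofSeconds(5))
                        .keepAlive(true)
                        .build())
                .build();

        RedisClusterClient client = RedisClusterClient.create(Collections.singletonList(uri));
        client.setOptions(options);
        return client;
    }
}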

Related

Spring Boot Video Streaming from S3 Bucket

I want to create a YouTube-like video streaming application, but on a small scale. I am using Spring Boot for the backend REST endpoints and an Amazon S3 bucket for storing the video files. I am able to upload and download video files to the S3 bucket, but I am confused about the streaming side. I want to play those video files in a JSP page. I have heard about AWS Video on Demand, AWS Kinesis, etc. Can someone suggest or share a link on the best approach to follow for video streaming with Spring Boot, or is there any other service apart from the AWS services that would be useful in this scenario? I am totally confused. Please help me out. Thank you.
I have created a sample project for streaming AWS S3 resources using Spring Boot.
You can set up a controller with whatever mapping you need.
For this demo code the endpoint is http://localhost:port/bucket_name/object_key
@RestController
public class ApiController {

    @Value("${aws.region}")
    private String awsRegion;

    @GetMapping(value = "/**", produces = { MediaType.APPLICATION_OCTET_STREAM_VALUE })
    public ResponseEntity<StreamingResponseBody> getObject(HttpServletRequest request) {
        try {
            AmazonS3 s3client = AmazonS3ClientBuilder.standard().withRegion(awsRegion).build();
            String uri = request.getRequestURI();
            // URI has the form /bucket_name/object_key: strip the leading slash, then split once.
            String[] uriParts = uri.split("/", 2)[1].split("/", 2);
            String bucket = uriParts[0];
            String key = uriParts[1];
            System.out.println("Fetching " + uri);
            S3Object object = s3client.getObject(bucket, key);
            S3ObjectInputStream finalObject = object.getObjectContent();
            // Copy the S3 stream to the response in 1 KB chunks without buffering the whole file.
            final StreamingResponseBody body = outputStream -> {
                int numberOfBytesToWrite = 0;
                byte[] data = new byte[1024];
                while ((numberOfBytesToWrite = finalObject.read(data, 0, data.length)) != -1) {
                    outputStream.write(data, 0, numberOfBytesToWrite);
                }
                finalObject.close();
            };
            return new ResponseEntity<StreamingResponseBody>(body, HttpStatus.OK);
        } catch (Exception e) {
            System.err.println("Error " + e.getMessage());
            return new ResponseEntity<StreamingResponseBody>(HttpStatus.BAD_REQUEST);
        }
    }
}
You need to use StreamingResponseBody in your ResponseEntity.
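As a side note, if the browser should play the stream inline rather than treat it as a generic download, you could return the content type stored on the S3 object instead of always sending application/octet-stream. A rough sketch (the helper name is made up, and it assumes a content type such as video/mp4 was set when the object was uploaded):
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3Object;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.servlet.mvc.method.annotation.StreamingResponseBody;

public class S3ResponseHelper {

    // Hypothetical helper: answer with the content type stored on the S3 object (e.g. "video/mp4")
    // so the browser can play the stream inline instead of downloading it.
    static ResponseEntity<StreamingResponseBody> withS3ContentType(S3Object object, StreamingResponseBody body) {
        ObjectMetadata metadata = object.getObjectMetadata();
        String contentType = metadata.getContentType() != null
                ? metadata.getContentType()
                : MediaType.APPLICATION_OCTET_STREAM_VALUE; // fall back if no type was stored
        return ResponseEntity.ok()
                .contentType(MediaType.parseMediaType(contentType))
                .body(body);
    }
}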
If you need a ready-to-use microservice, feel free to explore the GitHub project s3-streamer I wrote for this very purpose.

Is it possible to add to IConfiguration after the WebHost has started?

I am using AWS Systems Manager Parameter Store to hold database connection strings, which are used to dynamically build a DbContext in my .NET Core application.
I am using the .NET Core AWS configuration provider (from https://aws.amazon.com/blogs/developer/net-core-configuration-provider-for-aws-systems-manager/) which injects my parameters into the IConfiguration at runtime.
At the moment I am having to keep my AWS access key/secret in code so it can be accessed by the ConfigurationBuilder, but I would like to move this out of the code base and store it in appsettings or similar.
Here is my method to create the web host builder, called at startup:
public static IWebHostBuilder CreateWebHostBuilder(string[] args)
{
    var webHost = WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>();

    AWSCredentials credentials = new BasicAWSCredentials("xxxx", "xxxx");
    AWSOptions options = new AWSOptions()
    {
        Credentials = credentials,
        Region = Amazon.RegionEndpoint.USEast2
    };

    webHost.ConfigureAppConfiguration(config =>
    {
        config.AddJsonFile("appsettings.json");
        config.AddSystemsManager("/ParameterPath", options, reloadAfter: new System.TimeSpan(0, 1, 0)); // Reload every minute
    });

    return webHost;
}
I need to be able to inject the BasicAWSCredentials parameter from somewhere.
You need to access an already built configuration to be able to retrieve the information you seek.
Consider building one just to retrieve the needed credentials:
public static IWebHostBuilder CreateWebHostBuilder(string[] args)
{
    var webHost = WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>();

    var configuration = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .Build();

    var access_key = configuration.GetValue<string>("access_key:path_here");
    var secret_key = configuration.GetValue<string>("secret_key:path_here");

    AWSCredentials credentials = new BasicAWSCredentials(access_key, secret_key);
    AWSOptions options = new AWSOptions()
    {
        Credentials = credentials,
        Region = Amazon.RegionEndpoint.USEast2
    };

    webHost.ConfigureAppConfiguration(config =>
    {
        config.AddJsonFile("appsettings.json");
        config.AddSystemsManager("/ParameterPath", options, reloadAfter: new System.TimeSpan(0, 1, 0)); // Reload every minute
    });

    return webHost;
}
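A note on the lookup keys: access_key:path_here and secret_key:path_here use the standard IConfiguration key syntax, where the colon separates nested sections of appsettings.json; replace them with whichever paths your keys actually live under.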
I would also suggest reviewing Configuring AWS Credentials in the SDK docs for alternative ways of storing and retrieving the credentials; if the machine already has credentials available through the SDK's default chain (environment variables, the shared credentials file, or an instance profile), you may not need to pass BasicAWSCredentials explicitly at all.

AWS DataPipelineClient - listPipelines returns no records

I am trying to access my AWS DataPipelines using AWS Java SDK v1.7.5, but listPipelines is returning an empty list in the code below.
I have DataPipelines that are scheduled in the US East region, which I believe I should be able to list using the listPipelines method of the DataPipelineClient. I am already using the ProfilesConfigFile to authenticate and connect to S3, DynamoDB and Kinesis without a problem. I've granted the PowerUserAccess Access Policy to the IAM user specified in the config file. I've also tried applying the Administrator Access policy to the user, but it didn't change anything. Here's the code I'm using:
// Establish credentials for connecting to AWS.
File configFile = new File(System.getProperty("user.home"), ".aws/config");
ProfilesConfigFile profilesConfigFile = new ProfilesConfigFile(configFile);
AWSCredentialsProvider awsCredentialsProvider = new ProfileCredentialsProvider(profilesConfigFile, "default");

// Set up the AWS DataPipeline connection.
DataPipelineClient dataPipelineClient = new DataPipelineClient(awsCredentialsProvider);
Region usEast1 = Region.getRegion(Regions.US_EAST_1);
dataPipelineClient.setRegion(usEast1);

// List all pipelines we have access to.
ListPipelinesResult listPipelinesResult = dataPipelineClient.listPipelines(); // empty list returned here.
for (PipelineIdName p : listPipelinesResult.getPipelineIdList()) {
    System.out.println(p.getId());
}
Make sure to check whether there are more results - I've noticed the API sometimes returns only a few pipelines (it could even be empty) but sets a flag indicating that there are more results. You can retrieve them like this:
void listPipelines(DataPipelineClient dataPipelineClient, String marker) {
    ListPipelinesRequest request = new ListPipelinesRequest();
    if (marker != null) {
        request.setMarker(marker);
    }
    ListPipelinesResult listPipelinesResult = dataPipelineClient.listPipelines(request);
    for (PipelineIdName p : listPipelinesResult.getPipelineIdList()) {
        System.out.println(p.getId());
    }
    // Call recursively if there are more results:
    if (listPipelinesResult.getHasMoreResults()) {
        listPipelines(dataPipelineClient, listPipelinesResult.getMarker());
    }
}
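To walk all pages from the beginning, call it with a null marker, e.g. listPipelines(dataPipelineClient, null);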

How to poll in AWS SDK Java?

I am new to the AWS SDK for Java. I am trying to write code to control instances and get all the AWS EC2 information.
I am able to start an instance and also stop it. But as you all know, it takes some time for an instance to start, so I want to wait (without using Thread.sleep) until it is up, and likewise when stopping an instance I want it to wait until the stop completes before I proceed to the next step.
Here's the code:
AmazonEC2 ec2 = new AmazonEC2Client(credentialsProvider);
DescribeInstancesResult describeInstancesRequest = ec2.describeInstances();
List<Reservation> reservations = describeInstancesRequest.getReservations();
Set<Instance> instances = new HashSet<Instance>();
for (Reservation reservation : reservations) {
    instances.addAll(reservation.getInstances());
}
for (Instance instance : instances) {
    if (instance.getInstanceId().equals("myimage")) {
        List<String> instancesToStart = new ArrayList<String>();
        instancesToStart.add(instance.getInstanceId());
        StartInstancesRequest startr = new StartInstancesRequest();
        startr.setInstanceIds(instancesToStart);
        ec2.startInstances(startr);
        Thread.sleep(60 * 1000);
    }
    if (instance.getState().getName().equals("running")) {
        List<String> instancesToStop = new ArrayList<String>();
        instancesToStop.add(instance.getInstanceId());
        StopInstancesRequest stoptr = new StopInstancesRequest();
        stoptr.setInstanceIds(instancesToStop);
        ec2.stopInstances(stoptr);
    }
}
Also, I'd like to mention that whenever I try to get the list of images, it hangs on the code below.
DescribeImagesResult describeImagesResult = ec2.describeImages();
You can re-describe the instance with the same instance id every time you want to see its updated status:
DescribeInstancesRequest request = new DescribeInstancesRequest()
        .withInstanceIds(instanceId); // the instance id that you got previously from describeInstances
Instance instance = ec2.describeInstances(request)
        .getReservations().get(0).getInstances().get(0);
To get the updated state, use something like this:
String state = instance.getState().getName(); // e.g. "pending", "running", "stopped"
I think the key here is saving the instance id of the instance that you care about.
boto in Python has a nice method instance.update() that can be called on an instance to refresh its status, but I can't find an equivalent in Java.
Hope this helps.
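A rough sketch of how the polling could look if you want to avoid a fixed one-minute sleep (not from the original answer; the EC2 client and instance id are assumed to already exist):
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.Instance;

public class InstanceStateWaiter {

    // Re-describes the instance every few seconds until it reports the wanted state,
    // e.g. "running" after startInstances or "stopped" after stopInstances.
    static void waitForState(AmazonEC2 ec2, String instanceId, String wantedState)
            throws InterruptedException {
        while (true) {
            DescribeInstancesRequest request = new DescribeInstancesRequest()
                    .withInstanceIds(instanceId);
            Instance instance = ec2.describeInstances(request)
                    .getReservations().get(0).getInstances().get(0);
            if (wantedState.equals(instance.getState().getName())) {
                return;
            }
            Thread.sleep(5000); // short pause between polls instead of one long sleep
        }
    }
}
Newer SDK versions also ship waiters (for example ec2.waiters().instanceRunning()) that wrap exactly this kind of loop. As for describeImages() appearing to hang: with no filters it asks EC2 for every public AMI, which is an enormous result set; narrowing the request, for example with new DescribeImagesRequest().withOwners("self"), keeps it fast.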

How to request "Snapshot Log" through AWS Java SDK?

Is it possible to request "Snapshot Logs" through the AWS SDK somehow?
It's possible to do it through the AWS console.
Cross posted to Amazon forum.
Requesting a log snapshot is a three-step process. First you have to make an environment information request:
elasticBeanstalk.requestEnvironmentInfo(
        new RequestEnvironmentInfoRequest()
                .withEnvironmentName(environmentName)
                .withInfoType("tail"));
Then you have to retrieve the environment information (the request is processed asynchronously, so the tail data may take a short while to become available):
final List<EnvironmentInfoDescription> envInfos =
        elasticBeanstalk.retrieveEnvironmentInfo(
                new RetrieveEnvironmentInfoRequest()
                        .withEnvironmentName(environmentName)
                        .withInfoType("tail")).getEnvironmentInfo();
This returns a list of environment info descriptions, with the EC2 instance id and the URL of an S3 object that contains the log snapshot. You can then retrieve the logs with:
DefaultHttpClient client = new DefaultHttpClient();
DefaultHttpRequestRetryHandler retryhandler = new DefaultHttpRequestRetryHandler(3, true);
client.setHttpRequestRetryHandler(retryhandler);
for (EnvironmentInfoDescription environmentInfoDescription : envInfos) {
    System.out.println(environmentInfoDescription.getEc2InstanceId());
    HttpGet rq = new HttpGet(environmentInfoDescription.getMessage());
    try {
        HttpResponse response = client.execute(rq);
        InputStream content = response.getEntity().getContent();
        System.out.println(IOUtils.toString(content));
    } catch (Exception e) {
        System.out.println("Exception fetching " + environmentInfoDescription.getMessage());
    }
}
I hope this helps!