AWS Kendra to UI - amazon-web-services

I am trying to integrate Amazon Kendra search into my UI search bar, but I do not know where to start. Can anybody please tell me how I can do this?
I have been reading the Amazon documentation on how to create a Kendra index and sync it with a data source, and I have also tried the hands-on exercises for the same.
I have already created a UI with HTML and CSS, wired up with JavaScript, to send video files to S3. Now what I want is that when a user searches for anything, matching content from the videos should come back as a result in the UI using Amazon Kendra.

Have you tried looking at Deploying Amazon Kendra? That page provides documentation for kick-starting your Kendra integration, along with some starter code. Although it may differ from your current UI setup, it covers how a website calls the Kendra backend through the Query API and processes the query results.
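To show the shape of that call, here is a minimal sketch using the AWS SDK (shown in Ruby for brevity; the index ID, region, and query text are placeholders, and your UI's JavaScript would call an endpoint on your own backend that runs something like this, rather than calling Kendra directly):

require 'aws-sdk-kendra' # gem install aws-sdk-kendra

# Placeholder values: use your own index ID and region.
kendra = Aws::Kendra::Client.new(region: 'us-east-1')

response = kendra.query(
  index_id:   'YOUR-KENDRA-INDEX-ID',
  query_text: 'text typed into the search bar'
)

response.result_items.each do |item|
  puts item.document_title&.text    # title of the matching document
  puts item.document_excerpt&.text  # excerpt Kendra considers relevant
  puts item.document_uri            # e.g. the indexed S3 object's URI
end

Your frontend would then render result_items as the search results in the UI.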

Related

How to continue uploading data, including images, to a REST API (like posting on Facebook)

I am working on building a real estate application using Spring Boot (Java). The application runs on an AWS server, and images are uploaded to an S3 bucket.
What I want is that when a user adds a property and then closes the app, the upload task continues in the background and the user is notified when it completes.
I found a resumeUpload method that looks like just what you want here.
There is also an upload example in the AWS S3 documentation.
Hope this helps.
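For what it's worth, resumable uploads to S3 are usually built on the multipart upload API. Below is a rough sketch of the idea (shown with the Ruby SDK; the Java SDK exposes the same multipart calls, and the bucket, key, and file names here are placeholders):

require 'aws-sdk-s3' # gem install aws-sdk-s3

s3        = Aws::S3::Client.new(region: 'us-east-1')
bucket    = 'my-property-images'           # placeholder
key       = 'listings/123/front-door.jpg'  # placeholder
file      = 'front-door.jpg'
part_size = 5 * 1024 * 1024                # parts must be at least 5 MB (except the last)

# Start a multipart upload and persist upload_id somewhere durable so a
# background job can pick the transfer up again after the app is closed.
upload_id = s3.create_multipart_upload(bucket: bucket, key: key).upload_id

# On resume: ask S3 which parts already arrived and skip them.
done  = s3.list_parts(bucket: bucket, key: key, upload_id: upload_id).parts
parts = done.map { |p| { part_number: p.part_number, etag: p.etag } }

File.open(file, 'rb') do |io|
  part_number = 0
  while (chunk = io.read(part_size))
    part_number += 1
    next if parts.any? { |p| p[:part_number] == part_number }
    etag = s3.upload_part(bucket: bucket, key: key, upload_id: upload_id,
                          part_number: part_number, body: chunk).etag
    parts << { part_number: part_number, etag: etag }
  end
end

# Tell S3 to assemble the uploaded parts into the final object.
s3.complete_multipart_upload(bucket: bucket, key: key, upload_id: upload_id,
                             multipart_upload: { parts: parts.sort_by { |p| p[:part_number] } })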

Is there an API for programmatically creating saved searches in Google Logs Explorer?

Using Google Logs Explorer it's possible to save searches and even share them with others. I'd like to manage a set of shared searches programmatically so that my team and I can store search definitions in Git and have automation apply any updates to these definitions.
Does anyone know if there is an official API for managing and sharing these saved searches? I don't mind if this is a REST API, a Golang API, or through gcloud commands.
Thanks in advance!
I do not think there is an API specific to the Logs Explorer; there is only the Cloud Logging API, which writes log entries and manages the Cloud Logging configuration. Using the interface to create the saved searches and share the queries may help, and you are able to download the logs in CSV or JSON format.
You can also use regular expressions in the Query builder and with the gcloud command-line tool.
Please review these features in the Logs Explorer.
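If you still want the definitions in Git, one workaround (my suggestion, not an official saved-search API) is to store the filter strings themselves in the repository and have automation run them through gcloud, for example:

gcloud logging read 'resource.type="gce_instance" AND textPayload=~"timeout"' \
    --project=my-project --limit=20 --format=json

Here my-project and the filter are placeholders; the regular-expression operator =~ is part of the Logging query language.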

Architecture of a product search engine

I have developed a product search engine using the AWS API Gateway, Lambda, and Elasticsearch services. When a user queries for a product, the web server forwards the request to API Gateway, the Lambda function is invoked, and Lambda runs a search query in ES and sends back the results.
This works fine, but I want the query results to be cached somewhere when more than 40 items are returned, so users can move to the next page and so on without triggering the Lambda again and again.
Is there an easy way of doing this without creating a web service using Node.js or something like that? I was checking ElastiCache but I couldn't find a proper use case.
Any idea how this can be achieved without maintaining a 24/7 web service?
Thanks!
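For reference, the search Lambda described above looks roughly like the following sketch (the Ruby runtime, the elasticsearch gem, the products index, and the page parameter are illustrative assumptions, not the actual setup):

require 'json'
require 'elasticsearch' # gem install elasticsearch

ES        = Elasticsearch::Client.new(url: ENV['ES_ENDPOINT'])
PAGE_SIZE = 40

def handler(event:, context:)
  params = event['queryStringParameters'] || {}
  page   = [params.fetch('page', '1').to_i, 1].max

  result = ES.search(
    index: 'products',                      # placeholder index name
    body: {
      query: { match: { name: params['q'] } },
      from:  (page - 1) * PAGE_SIZE,        # standard ES from/size pagination
      size:  PAGE_SIZE
    }
  )

  { statusCode: 200, body: JSON.generate(result['hits']['hits']) }
end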

Extracting Sitecore Analytic Data

I'm new to Sitecore concepts, and I've been searching for almost five days for an answer. I couldn't find what I'm looking for.
I'm trying to access Sitecore analytics data from a web service. I found a Sitecore web service by using this document. I want to extract the data that this describes. I believe this is a public demo site.
I want to access the Web API, then extract the data and use it in my own project. Any ideas?
When working with Sitecore 9, the API you want to use to connect to analytics data is known as xConnect. A secure connection using trusted certificates is required, so you cannot connect to an existing instance that somebody else set up, like the Habitat demo you linked to.
xConnect is an abstraction API that allows you to collect and search all data in the xDB. The architecture fully supports both vertical and horizontal scaling of xConnect services separate from your Sitecore installations.
Resources
You can read more about xConnect here in the official developer documentation: https://doc.sitecore.net/developers/xp/xconnect/
There is an xConnect tutorial available here: https://doc.sitecore.net/developers/xp/getting-started/#tutorials-xconnect
I also have a small tutorial you can use on GitHub to start learning the concepts: https://github.com/jst-cyr/XConnectTutorial

What is the recommended way to handle large file uploads to s3?

I'm using the AWS SDK for Ruby to upload large files from users to S3.
The server is a Sinatra app with a POST /images endpoint accepting multipart/form-data. I'm experiencing a noticeable delay with user uploads. This is to be expected, because it makes a synchronous request to S3. I wanted to move this to a background job using something like Sidekiq, but I'm not sure I like that solution.
I've read online that some people promote direct uploads to S3 from the client side. Some even call this a "best practice." I'm hesitant to do this for several reasons:
1. My client-side code would be heavily tied to my cloud provider. I love AWS (great experiences), but I like to remain somewhat cloud-agnostic. I don't want my mobile and web apps to have to know the details of my AWS setup. If I choose to move away from S3 at a later date (unlikely but plausible), I would want this to be a seamless transition. Obviously, this works OK for a web app, because I can always redeploy quickly. However, I have to worry about mobile: users may not update, and everything becomes a lot more complicated if some users are uploading to S3 and some are uploading to another service.
2. Business logic for determining which bucket and region to use would need to either live on the client side, or I'd need to expose an endpoint that returns the bucket and region for each user. Then I'd have to make a request to my server to figure out the parameters before I can begin uploading to S3. I want to be able to change buckets or re-route users to alternative regions, so I'm not a fan of this tight coupling or the additional request.
3. Security is a huge concern. When files are uploaded and processed through my server, I can use AWS IAM to ensure that these files only come from my server. I believe I would have to grant an "all-write" privilege to users, which is problematic. If I put AWS IAM credentials in JavaScript, I do not see how to ensure that users do not get unlimited write access to my bucket; all client-side JavaScript can be read by a user. In addition, I'm unsure how to handle validations. On my server, I can scan the files and determine whether or not to upload them to S3. If I upload directly from the client, I would have to move this processing into Lambda functions. I'm OK with that, but there is a chance the object could be retrieved by users before the processing has occurred, so I'd have to build some sort of locking system to prevent access before processing.
So, the bottom line is that I have no idea where to go from here. I've hacked around some solutions, but I'm not thrilled with any of them. I'd love to learn how other startups and enterprises are tackling this kind of problem. What would you recommend? How would you counter my arguments? Forgive me if I'm missing something; I'm still a relative AWS newbie.
If you're worried about changing the upload service later, I would suggest putting an API in front of it; that way you can change the backing storage for your service. The mobile or web client would call the service, and your API would place the file where it needs to go. You have more control over the API, and you could also just create a signed S3 URL to send to the client and still let the client do the uploading.
An API, as in point 1, solves this problem too: the client doesn't have to do all the work.
Use the Security Token Service (STS) and temporary security credentials.
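To make the signed-URL suggestion concrete, here is a minimal sketch assuming Sinatra and the aws-sdk-s3 gem (the bucket name and key scheme are placeholders): the server keeps all bucket and region logic and hands the client nothing but a short-lived URL to PUT the file to.

require 'sinatra'
require 'json'
require 'securerandom'
require 'aws-sdk-s3' # gem install aws-sdk-s3

# Sketch only: the server decides bucket, region, and key, and returns a
# short-lived presigned PUT URL, so no AWS credentials reach the client.
post '/uploads' do
  key    = "images/#{SecureRandom.uuid}"   # placeholder key scheme
  signer = Aws::S3::Presigner.new(client: Aws::S3::Client.new(region: 'us-east-1'))
  url    = signer.presigned_url(:put_object,
                                bucket: 'my-upload-bucket',  # placeholder
                                key: key,
                                expires_in: 900)             # URL valid for 15 minutes

  content_type :json
  { upload_url: url, key: key }.to_json
end

The client uploads directly to S3 with that URL, but only to the key your server chose, and only until the URL expires.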
I agree with strongjz; you should use an API to upload your files from the server side.
Cloudinary provides an API for uploading images and videos to the cloud.
From my experience using Cloudinary, it is the right solution for you.
All your images, videos and required metadata are stored and managed by Cloudinary in Amazon S3 buckets owned by Cloudinary.
The default maximum file size limit for videos is 40MB. This can be customized for paid plans.
For example, in Ruby:

require 'cloudinary'

# Upload a non-image/video file to Cloudinary as a raw asset.
Cloudinary::Uploader.upload("sample_spreadsheet.xls", resource_type: :raw)