How to access AWS ElasticSearch Service automated snapshots/backups? - amazon-web-services

I was under the impression the AWS Elasticsearch service comes with automated snapshots/backups. That's what I find in the documentation. It suggests they happen once a day and are stored on S3, but I do not see any backups in any of my S3 buckets. How do you get access to the automated snapshots?
It probably doesn't matter, but I used the following template to create my Elasticsearch domain, explicitly indicating that I want automated backups.
CloudFormation
"SnapshotOptions": {
"AutomatedSnapshotStartHour": "0"
}

You can't get to the S3 bucket itself, but you can restore from the backups stored inside it by using cURL or another HTTP client to communicate directly with your cluster, telling it to restore from the "cs-automated" repository, which is linked to the S3 snapshots. To communicate with your ES cluster directly via HTTP you'll have to temporarily open an IP access policy on your cluster.
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-snapshots.html#es-managedomains-snapshot-restore
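For example, a minimal sketch using Python's requests library, assuming the temporarily opened access policy mentioned above; the endpoint, snapshot name, and index name are placeholders:

import requests

# Hypothetical domain endpoint -- replace with your Elasticsearch Service endpoint.
ES_HOST = "https://search-mydomain-abc123.us-east-1.es.amazonaws.com"

# List the automated snapshots stored in the AWS-managed "cs-automated" repository.
print(requests.get(f"{ES_HOST}/_snapshot/cs-automated/_all").json())

# Restore a single index from one of those snapshots (names are placeholders).
requests.post(
    f"{ES_HOST}/_snapshot/cs-automated/snapshot-name/_restore",
    json={"indices": "my-index"},
)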

Related

Regularly pull files from On-Prem server to S3 using AWS Transfer family

I'm trying to prepare a flow where we can regularly pull newly available files from a third party's on-prem server into our S3 bucket using AWS Transfer Family.
I read this documentation https://aws.amazon.com/blogs/storage/how-discover-financial-secures-file-transfers-with-aws-transfer-family/, but it was not clear about setting up and configuring the process.
Can someone share any clear documentation or reference links on using AWS Transfer Family to pull files from an external on-prem server into our S3 bucket?
@Sampath, I think you misunderstood the available features of the AWS Transfer service. That service acts as a serverless SFTP front end with AWS S3 as the backend storage, to which you connect via the SFTP protocol (it now supports FTP and FTPS as well). You can either PUSH data to S3 or PULL data from S3 via the AWS Transfer service. You cannot PULL data into S3 from anywhere else via the AWS Transfer service alone.
You may have to use another solution, such as a Python script running on AWS EC2, for that purpose.
Another solution would be to have the external third-party server connect to the AWS Transfer service and PUSH files to S3 via AWS Transfer.
As per your use case, I think you need a simple solution that connects to the external third-party server and copies files from it to the AWS S3 bucket. It can be done via a Python script, and you can run it on AWS EC2, AWS ECS, AWS Lambda, AWS Batch, etc., depending on the specifications and requirements.
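For illustration, a minimal sketch of such a script using paramiko and boto3; the host, credentials, remote path, and bucket name below are placeholders:

import boto3
import paramiko

# Hypothetical connection details -- replace with the third party's SFTP server and your bucket.
SFTP_HOST = "sftp.example.com"
SFTP_USER = "username"
SFTP_KEY_PATH = "/path/to/private_key"
BUCKET = "my-destination-bucket"

s3 = boto3.client("s3")

# Connect to the external SFTP server.
transport = paramiko.Transport((SFTP_HOST, 22))
transport.connect(username=SFTP_USER, pkey=paramiko.RSAKey.from_private_key_file(SFTP_KEY_PATH))
sftp = paramiko.SFTPClient.from_transport(transport)

# Copy every file in the remote directory into S3.
for filename in sftp.listdir("/outgoing"):
    with sftp.open(f"/outgoing/{filename}", "rb") as remote_file:
        s3.upload_fileobj(remote_file, BUCKET, filename)

sftp.close()
transport.close()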
I used AWS Transfer once, found it to be very expensive, and went with AWS EC2 instead. In the case of AWS EC2, you can even buy reserved instances to further reduce the cost. If the task is just about copying files from an external server to S3 and the copy job will never take more than 10 minutes, it is better to run it on AWS Lambda.
In short, you cannot PULL data from any server into S3 using the AWS Transfer service. You can only PUSH data to or PULL data from S3 using the AWS Transfer service.
References to some informative blogs:
Centralize data access using AWS Transfer Family and AWS Storage Gateway
How Discover Financial secures file transfers with AWS Transfer Family
Moving external site data to AWS for file transfers with AWS Transfer Family
Easy SFTP Setup with AWS Transfer Family
With the AWS Transfer Family service you can create servers that use the SFTP, FTPS, and FTP protocols for your file transfers, and use Amazon S3 and Amazon EFS as domains to store and access your files.
To connect your on-premises servers with the Transfer Family server, you will need to use a service like File Gateway/Storage Gateway and connect via HTTPS to S3 to sync your files.
If you want more details on how to connect your on-premises servers with the AWS S3/Transfer Family services, take a look at this blog post: Centralize data access using AWS Transfer Family and AWS Storage Gateway

Is copying an S3 bucket from one AWS account to another AWS account secure in transit?

I am looking to copy the contents of one S3 bucket to another S3 bucket in a different account.
I found the following tutorial and tested it with non-confidential files: https://medium.com/tensult/copy-s3-bucket-objects-across-aws-accounts-e46c15c4b9e1
I am wondering whether data transferred between accounts using this method is secure, as in encrypted in transit. Does AWS perform a direct copy, or does the computer running the sync act as a middleman, downloading the objects and then uploading them to the destination bucket?
I do have AES-256 (server-side encryption with Amazon S3-managed keys) enabled on the source S3 bucket.
I did see a recommendation about using AWS KMS, but it was not clear if that would do what I need.
Just want to make sure the S3 transfer from one account to the other is secure!
When using the aws s3 cp or aws s3 sync commands to copy between buckets, the objects are always copied "within S3"; they are not downloaded to and re-uploaded from the machine running the command.
If you are copying data between buckets in the same region, the traffic stays entirely within the AWS "backplane", so it never goes out to the Internet or into a VPC. I believe it is also encrypted while being copied.
If you are copying between regions, the data is encrypted as it travels across the AWS network between the regions. (Note: Data Transfer charges will apply.)
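To illustrate the server-side copy, here is a minimal boto3 sketch (bucket and key names are placeholders); bucket-to-bucket copies with the CLI use the same copy API under the hood:

import boto3

s3 = boto3.resource("s3")

# Server-side copy: S3 performs the CopyObject call internally, so the object
# data never passes through the machine running this script.
# Bucket and key names below are placeholders.
copy_source = {"Bucket": "source-bucket", "Key": "data/report.csv"}
s3.Bucket("destination-bucket").copy(copy_source, "data/report.csv")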
As you're using the AWS CLI, it will default to using HTTPS, according to the documentation.
By default, the AWS CLI sends requests to AWS services by using HTTPS on TCP port 443. To use the AWS CLI successfully, you must be able to make outbound connections on TCP port 443.
You can also ensure that no plain-text (HTTP) requests are accepted by using the "aws:SecureTransport": "false" condition within a bucket policy.
Take a look at the What S3 bucket policy should I use to comply with the AWS Config rule s3-bucket-ssl-requests-only? documentation for an example bucket policy using this condition.
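A minimal sketch of applying such a policy with boto3 (the bucket name is a placeholder; the statement mirrors the s3-bucket-ssl-requests-only policy in the documentation linked above):

import json
import boto3

s3 = boto3.client("s3")

# Deny any request to the bucket that does not arrive over HTTPS.
# "my-bucket" is a placeholder bucket name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))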

Ingest files from GCP Cloud Storage to AWS S3 bucket dynamically

I am working on a pet project based on multi-cloud (AWS and GCP) which is based on serverless architecture.
Now there are files generated by the business logic within GCP (using Cloud Functions and Pub/Sub), and they are stored in GCP Cloud Storage. I want to ingest these files dynamically into an AWS S3 bucket from Cloud Storage.
One possible way is by using the gsutil library (Exporting data from Google Cloud Storage to Amazon S3), but this would require a compute instance and running the gsutil commands manually, which I want to avoid.
In answering this I'm reminded a bit of a Rube Goldberg-type setup, but I don't think this is too bad.
From the Google side you would create a Cloud Function that is notified when a new file is created. You would use the Object Finalize event. This function would get the information about the file and then call an AWS Lambda fronted by AWS API Gateway.
The GCP function would pass the bucket and file information to the AWS Lambda. On the AWS side, the Lambda would use your GCP credentials and the GCP client libraries to download the file and then upload it to S3.
Something like the following minimal sketch of the two functions (the API Gateway URL, bucket names, and credential handling are placeholder assumptions):
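# --- GCP Cloud Function (triggered by the Object Finalize event) ---
import requests

# Hypothetical API Gateway endpoint fronting the AWS Lambda.
API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/ingest"

def on_object_finalize(event, context):
    # Forward the bucket and object name of the newly created file to AWS.
    requests.post(API_URL, json={"bucket": event["bucket"], "name": event["name"]})


# --- AWS Lambda (behind API Gateway, proxy integration) ---
import json
import boto3
from google.cloud import storage  # bundle google-cloud-storage with the Lambda package

DEST_BUCKET = "my-aws-destination-bucket"  # placeholder

def handler(event, context):
    body = json.loads(event["body"])
    # GCP service-account credentials must be made available to the Lambda,
    # for example via environment variables or Secrets Manager.
    gcs = storage.Client()
    blob = gcs.bucket(body["bucket"]).blob(body["name"])
    data = blob.download_as_bytes()
    boto3.client("s3").put_object(Bucket=DEST_BUCKET, Key=body["name"], Body=data)
    return {"statusCode": 200}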
All serverless on both GCP and AWS. Testing isn't bad, as you can keep them separate: make sure that GCP is sending what you want, and make sure that AWS is parsing it and doing the correct thing. There is likely some authentication that needs to happen from the GCP Cloud Function to API Gateway. Additionally, the API Gateway can be eliminated if you're OK pulling AWS client libraries into the GCP function. Since you've got to pull GCP libraries into the AWS Lambda, this shouldn't be much of a problem.

Accessing AWS S3 from within google GCP

We were doing most of our cloud processing (and still do) using AWS. However, we also now have some credits on GCP and would like to use and want to explore interoperability between the cloud providers.
In particular, I was wondering if it is possible to use AWS S3 from within GCP. I am not talking about migrating the data, but whether there is some API which will allow AWS S3 to work seamlessly from within GCP. We have a lot of data and databases hosted on AWS S3 and would prefer to keep everything there, as AWS still does the bulk of our compute.
I guess one way would be to transfer the AWS keys to the GCP VM and then use the boto3 library to download content from AWS S3, but I was wondering if GCP, by itself, provides some other tools for this.
From an AWS perspective, an application running on GCP should appear logically as an on-premises computing environment. This means that you should be able to leverage the services of AWS that can be invoked from an on-premises solution. The GCP environment will have Internet connectivity to AWS which should be a pretty decent link.
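For example, a minimal boto3 sketch run from a GCP VM; the bucket, key, and credentials are placeholders, and in practice the keys should come from a secrets store rather than being hard-coded:

import boto3

# Placeholder credentials -- load these from a secrets store in practice.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
    region_name="us-east-1",
)

# From GCP's point of view this is just an outbound HTTPS call to AWS.
s3.download_file("my-aws-bucket", "data/input.csv", "/tmp/input.csv")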
In addition, there is a migration service which will move S3 storage to GCP GCS ... but this is distinct from what you were asking.
See also:
Getting started with Amazon S3
Storage Transfer Service Overview

Voice message save in aws s3 bucket using Amazon Connect

How do I save a customer's voice message and store it in an S3 bucket using Amazon Connect? I made a contact flow, but I do not understand how to save the voice message to an S3 bucket.
We've tried many ways to build a voicemail solution, including many of the things you might have found on the web. After much iteration we realized that we had a product that would be useful to others.
For voicemail in Amazon Connect, take a look at https://amazonconnectvoicemail.com as a simple, no-code integration that can be customized to meet the needs of your customers and organization!
As soon as you enable call recording, all recordings are placed automatically in the bucket you defined when you set up your Amazon Connect instance. Just check your S3 bucket to see if you can spot the recordings.
By default, AWS creates a new Amazon S3 bucket during the configuration process, with built-in encryption. You can also use existing S3 buckets. There are separate buckets for call recordings and exported reports, and they are configured independently. (https://docs.aws.amazon.com/connect/latest/adminguide/what-is-amazon-connect.html)
Recording to S3 only starts when an agent takes the call. Currently, there is no direct voicemail feature in Amazon Connect. You can forward the call to a service that supports it, such as Twilio.
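To check the bucket contents, here is a minimal boto3 sketch; the bucket name and prefix are assumptions based on the defaults chosen when the instance was set up, so use the values configured for your instance:

import boto3

s3 = boto3.client("s3")

# Bucket name and prefix below are placeholders.
response = s3.list_objects_v2(
    Bucket="amazon-connect-recordings-bucket",
    Prefix="connect/my-instance-alias/CallRecordings/",
)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])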