How to integrate HANA Cloud Platform with an AWS Vora instance

I have access to the HANA Cloud Platform and an AWS Vora instance. Is it possible to integrate them the way we do for an on-premise integration?

The precondition is that the server where Vora is located and the server where HANA is located can communicate with each other. If that is the case, you can access HANA from Vora by using the HANA data source, as sketched below. You can also access Vora from HANA by using the Spark Controller (a separate installation; not included in the current AWS developer edition).
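A minimal sketch of the Vora-to-HANA direction, assuming the com.sap.spark.hana Spark data source that SAP's Vora documentation describes; the option keys, hosts, credentials, and table names below are placeholders to verify against your Vora release:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SQLContext;

public class VoraHanaSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("vora-hana-sketch");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // Vora installations normally provide SAP's extended SQL context;
        // a plain SQLContext keeps this sketch self-contained.
        SQLContext sqlContext = new SQLContext(sc.sc());

        // Register a HANA table as a Spark relation through the HANA data
        // source. The package name and option keys follow SAP Vora
        // documentation conventions; verify them against your release.
        // All hosts, credentials, and names are placeholders.
        sqlContext.sql(
            "CREATE TEMPORARY TABLE hana_sales " +
            "USING com.sap.spark.hana " +
            "OPTIONS (" +
            "  host \"my-hana-host\"," +
            "  instance \"00\"," +
            "  user \"MYUSER\"," +
            "  passwd \"MyPassword\"," +
            "  dbschema \"MYSCHEMA\"," +
            "  path \"SALES\"" +
            ")");

        // Once registered, the HANA table is queryable like any Vora table.
        sqlContext.sql("SELECT COUNT(*) FROM hana_sales").show();
    }
}
```

Once the relation is registered, it can be joined against native Vora tables in the same Spark SQL statement.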

Related

Integrating Amazon RDS schema changes into Azure DevOps pipelines

We are currently using Azure DevOps to install applications to our AWS environment using the AWS Toolkit for Azure DevOps. We have a use case to integrate our RDS (MySQL) schema changes into the Azure DevOps pipelines to deploy the database changes.
We could not find any direct way to implement this. The viable option we found was to implement the database schema changes as a Lambda using a database migration tool like Evolve (https://evolve-db.netlify.app/) and invoke the Lambda from the pipeline, as sketched below. Any other approaches or recommendations are highly appreciated.
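As a sketch of the "invoke the Lambda from the pipeline" step, assuming the AWS SDK for Java v2 and a hypothetical function name db-migration-runner for the Lambda that runs the Evolve migration (a pipeline task would typically do the equivalent through the AWS CLI or the Toolkit's Lambda task):

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.InvokeRequest;
import software.amazon.awssdk.services.lambda.model.InvokeResponse;

public class InvokeMigrationLambda {
    public static void main(String[] args) {
        try (LambdaClient lambda = LambdaClient.builder()
                .region(Region.US_EAST_1) // adjust to your region
                .build()) {

            // "db-migration-runner" is a hypothetical name for the Lambda
            // that runs the Evolve migration against RDS.
            InvokeRequest request = InvokeRequest.builder()
                    .functionName("db-migration-runner")
                    .payload(SdkBytes.fromUtf8String("{\"environment\":\"staging\"}"))
                    .build();

            InvokeResponse response = lambda.invoke(request);

            // A function error means the migration failed; surface it so
            // the pipeline step fails instead of continuing the deployment.
            if (response.functionError() != null) {
                throw new RuntimeException("Migration failed: "
                        + response.payload().asUtf8String());
            }
            System.out.println("Migration output: " + response.payload().asUtf8String());
        }
    }
}
```

Failing the step when functionError() is set keeps a broken migration from letting the rest of the deployment proceed.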

Can we use source database endpoints hosted on servers other than AWS for AWS DMS?

Can we use source database endpoints that are hosted on servers other than AWS when using AWS DMS? If yes, can you help me by sharing any tutorials?
I have seen this image in the official AWS documentation.
Thank you!
Yes, you can use DMS to migrate an on-premises (or other-cloud) database to AWS, and vice versa.
The same documentation your screenshot comes from has, for example, the section "Migrating an On-Premises Oracle Database to Amazon Aurora MySQL".
If you check your screenshot, you will notice it says "The source or target database must be on an AWS service"; in other words, only one side of the migration has to be in AWS, so the other endpoint can live anywhere DMS can reach over the network. Creating such a source endpoint is sketched below.
There is a very good explanation in this blog article:
https://medium.com/workfall/how-to-do-database-migration-using-aws-database-migration-service-dms-from-on-premise-ec2-to-rds-d46b9144d3cc
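To make the on-premises part concrete, here is a hedged sketch of creating a DMS source endpoint that points at a database outside AWS, using the AWS SDK for Java v2; the engine, server name, credentials, and region are placeholders:

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.databasemigration.DatabaseMigrationClient;
import software.amazon.awssdk.services.databasemigration.model.CreateEndpointRequest;
import software.amazon.awssdk.services.databasemigration.model.CreateEndpointResponse;
import software.amazon.awssdk.services.databasemigration.model.ReplicationEndpointTypeValue;

public class CreateOnPremSourceEndpoint {
    public static void main(String[] args) {
        try (DatabaseMigrationClient dms = DatabaseMigrationClient.builder()
                .region(Region.US_EAST_1) // adjust to your region
                .build()) {

            // The server name points at the on-premises database; DMS only
            // needs the replication instance to reach it over VPN, Direct
            // Connect, or a public IP. All values are placeholders.
            CreateEndpointRequest request = CreateEndpointRequest.builder()
                    .endpointIdentifier("onprem-oracle-source")
                    .endpointType(ReplicationEndpointTypeValue.SOURCE)
                    .engineName("oracle")
                    .serverName("db.onprem.example.com")
                    .port(1521)
                    .databaseName("ORCL")
                    .username("dms_user")
                    .password("dms_password")
                    .build();

            CreateEndpointResponse response = dms.createEndpoint(request);
            System.out.println("Endpoint ARN: "
                    + response.endpoint().endpointArn());
        }
    }
}
```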

Azure Data Factory to SQL Server in AWS

I am new to Azure Data Factory (ADF) and would like to know whether it is technically possible to use ADF to copy data from a source in an AWS (not Azure) environment and write it to a sink in another AWS environment. I am aware that we need an Integration Runtime (IR) to connect to the source. Can we achieve copying to AWS as well using an IR?
According to this document:
Data stores with * can be on-premises or on Azure IaaS, and require you to install Data Management Gateway on an on-premises/Azure IaaS machine.
But this does not say whether we can or cannot transfer to an AWS environment.
You are referencing the ADF V1 documentation. You should refer to the ADF V2 documentation instead, as ADF V2 supports more data stores.
Currently, ADF V2 supports Amazon Marketplace Web Service as a source, but not as a sink. However, you could take a look at the generic ODBC connector if you have an ODBC driver for your SQL Server in AWS; it requires a self-hosted Integration Runtime that can reach the server, as sketched below.
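As a sketch, a generic ODBC linked service in ADF V2 is defined in JSON along these lines; the driver, server, credentials, and integration runtime name are placeholders, and the exact connection string depends on the ODBC driver installed on the self-hosted Integration Runtime machine:

```json
{
  "name": "AwsSqlServerOdbc",
  "properties": {
    "type": "Odbc",
    "typeProperties": {
      "connectionString": "Driver={ODBC Driver 17 for SQL Server};Server=myserver.aws.example.com;Database=mydb;",
      "authenticationType": "Basic",
      "userName": "myuser",
      "password": {
        "type": "SecureString",
        "value": "<password>"
      }
    },
    "connectVia": {
      "referenceName": "SelfHostedIR",
      "type": "IntegrationRuntimeReference"
    }
  }
}
```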

Storing cache/session data (key and value) in an AWS ElastiCache Redis cluster using the .NET SDK

I have to store the session logs (as key/value pairs) in an ElastiCache Redis cluster using the AWS .NET SDK.
But I could only see Memcached-cluster-related methods in the .NET SDK, and I need to store the logs in a Redis cluster.
Could anyone suggest the steps to store the logs in a Redis cluster using the AWS .NET SDK?
Thanks,
Prakash
For both Memcached and Redis clusters on AWS, you do not read/write data using the AWS SDKs. Just like databases on RDS, the AWS SDK helps you manage the servers/instances; it is not used for data access.
To access Redis data in ElastiCache, you would use any of the publicly available Redis client libraries for C# (the management/data split is sketched below):
List of C# Clients on redis.io
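The split is the same in every language: the AWS SDK handles the management plane (for example, looking up the cluster endpoint), while an ordinary Redis client handles the data. Here is a sketch of that split using the AWS SDK for Java v2 and the Jedis client; in C# the same pattern pairs the AWS SDK for .NET with a client such as StackExchange.Redis. The cluster ID and keys are placeholders:

```java
import redis.clients.jedis.Jedis;
import software.amazon.awssdk.services.elasticache.ElastiCacheClient;
import software.amazon.awssdk.services.elasticache.model.CacheCluster;
import software.amazon.awssdk.services.elasticache.model.DescribeCacheClustersRequest;

public class RedisSessionStore {
    public static void main(String[] args) {
        // 1) Management plane: the AWS SDK can describe the cluster and
        //    return its endpoint, but it cannot read or write keys.
        String host;
        int port;
        try (ElastiCacheClient elastiCache = ElastiCacheClient.create()) {
            CacheCluster cluster = elastiCache.describeCacheClusters(
                    DescribeCacheClustersRequest.builder()
                            .cacheClusterId("my-redis-cluster") // placeholder
                            .showCacheNodeInfo(true)
                            .build())
                    .cacheClusters().get(0);
            host = cluster.cacheNodes().get(0).endpoint().address();
            port = cluster.cacheNodes().get(0).endpoint().port();
        }

        // 2) Data plane: an ordinary Redis client (Jedis here) stores the
        //    session entries as key/value pairs.
        try (Jedis jedis = new Jedis(host, port)) {
            jedis.set("session:12345", "{\"user\":\"prakash\",\"loggedIn\":true}");
            String session = jedis.get("session:12345");
            System.out.println("Stored session: " + session);
        }
    }
}
```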

Accessing AWS Elasticsearch with Spring Data

I know that AWS Elasticsearch doesn't support the transport client (only access via HTTP on port 80), so is there another way of leveraging AWS Elasticsearch using Spring Data?
AWS now supports ES 2.x, so it is getting more interesting to use.
Any help or progress on this topic?
Regards,
Colin
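One HTTP-based route from that era, offered as a hedged sketch rather than a confirmed answer: the Jest client talks to Elasticsearch over HTTP, and the community spring-data-jest project wraps it for Spring Data. The endpoint below is a placeholder, and whether this fits your Spring Data version needs verifying:

```java
import io.searchbox.client.JestClient;
import io.searchbox.client.JestClientFactory;
import io.searchbox.client.config.HttpClientConfig;
import io.searchbox.core.DocumentResult;
import io.searchbox.core.Index;

public class HttpEsAccess {
    public static void main(String[] args) throws Exception {
        // AWS Elasticsearch only speaks HTTP, so an HTTP client like Jest
        // works where the native transport client does not.
        JestClientFactory factory = new JestClientFactory();
        factory.setHttpClientConfig(new HttpClientConfig.Builder(
                "https://search-mydomain.us-east-1.es.amazonaws.com") // placeholder endpoint
                .multiThreaded(true)
                .build());
        JestClient client = factory.getObject();

        // Index a simple document over HTTP.
        Index index = new Index.Builder("{\"title\":\"hello\"}")
                .index("articles")
                .type("article")
                .build();
        DocumentResult result = client.execute(index);
        System.out.println("Indexed: " + result.isSucceeded());
    }
}
```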