Integrating Amazon RDS Schema Changes into Azure DevOps Pipelines

We are currently using Azure DevOps to install applications into our AWS environment using the AWS Toolkit for Azure DevOps. We have a use case to integrate our RDS (MySQL) schema changes into the Azure DevOps pipelines so that database changes are deployed as part of the pipeline.
We could not find any direct way to implement this. The viable option we found was to implement the database schema changes as a Lambda using a database migration tool like Evolve (https://evolve-db.netlify.app/) and invoke the Lambda from the pipeline. Any other approaches or recommendations are highly appreciated.
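For reference, the kind of invocation we had in mind from a pipeline script step would be something like the following (a rough sketch only; the function name and the payload/response shape are placeholders, not an existing setup):

```python
# invoke_migration.py -- run as a script step in the Azure DevOps pipeline.
# Assumes AWS credentials are supplied by the AWS Toolkit service connection
# and that a Lambda named "rds-schema-migrator" (hypothetical) applies the
# Evolve migrations against the RDS instance.
import json
import sys

import boto3

lambda_client = boto3.client("lambda", region_name="eu-west-1")

response = lambda_client.invoke(
    FunctionName="rds-schema-migrator",    # hypothetical function name
    InvocationType="RequestResponse",      # wait for the migration to finish
    Payload=json.dumps({"target": "production"}).encode("utf-8"),
)

payload = json.loads(response["Payload"].read())
print(payload)

# Fail the pipeline step if the Lambda raised an error or reported a failure.
if response.get("FunctionError") or payload.get("status") != "ok":
    sys.exit(1)
```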

Related

MySQL DB schema to AWS API to AWS Lambda to AWS RDS

I am building a serverless application on AWS, with API Gateway, AWS Lambda functions, and AWS RDS (database).
I have an existing MySQL schema (basically, a table dump), and I want to create the API automatically from this schema, ideally as something that I can easily import into AWS API Gateway (for example from SwaggerHub or a similar service).
Then, I want the database operations (CRUD operations that match the API) to also be generated automatically for Node.js or Python, which I can then easily deploy to AWS Lambda, for example using SAM templates, or maybe just upload as a package somehow to AWS.
The lambda operations should be able to connect to my AWS RDS database, and perform the CRUD operations described by the API.
The idea is to determine some way to simplify this process. If the database schema changes significantly, for example, I do not want to manually edit a bunch of lambda functions to accommodate the new DB schema every time!
I'm wondering if anyone has any suggestions as to how I could make this work.
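For illustration, the kind of schema-driven handler I am imagining would be something like this (a rough sketch only, not production code; the event shape, environment variables, table names, and the use of pymysql are all assumptions on my side):

```python
# generic_crud.py -- hypothetical generic Lambda handler. The event shape is
# assumed to be {"table": "...", "action": "create" | "read", "data": {...}}.
# In real code the table name would have to be validated against a whitelist.
import os

import pymysql  # would need to be bundled in the deployment package or a layer


def _connect():
    return pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
        cursorclass=pymysql.cursors.DictCursor,
    )


def _columns(cursor, table):
    # Discover the table's columns at runtime, so a schema change does not
    # require editing the function.
    cursor.execute(
        "SELECT column_name AS name FROM information_schema.columns "
        "WHERE table_schema = %s AND table_name = %s",
        (os.environ["DB_NAME"], table),
    )
    return [row["name"] for row in cursor.fetchall()]


def handler(event, context):
    table, action, data = event["table"], event["action"], event.get("data", {})
    conn = _connect()
    try:
        with conn.cursor() as cur:
            cols = _columns(cur, table)
            if action == "create":
                fields = [c for c in cols if c in data]
                sql = "INSERT INTO {} ({}) VALUES ({})".format(
                    table, ", ".join(fields), ", ".join(["%s"] * len(fields))
                )
                cur.execute(sql, [data[f] for f in fields])
                conn.commit()
                return {"inserted": cur.lastrowid}
            if action == "read":
                cur.execute(
                    "SELECT * FROM {} WHERE id = %s".format(table), (data["id"],)
                )
                return cur.fetchone()
            raise ValueError("unsupported action: " + action)
    finally:
        conn.close()
```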

Migrate pipelines and files from one Azure DevOps Services organization to another

First question: we need a way to migrate all the pipelines from the source environment to a new target environment. We also need to do this for the files. This is not an issue with Azure DevOps Server, but we have had to move them one by one when going from Azure DevOps Services to Azure DevOps Services.
Second: there is no guidance on security governance best practices (RBAC/AD) or on setting up organizations and projects for a medium-sized development group that is migrating from another organization.
Any help would be greatly appreciated
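For the pipelines, the closest thing we have found so far is exporting the build definitions through the Azure DevOps REST API and re-creating them in the target organization. A rough sketch of the export side (the organization, project, and personal access token below are placeholders):

```python
# export_pipelines.py -- rough sketch: dump the build (pipeline) definitions
# from the source organization so they can be re-created in the target one.
# ORG, PROJECT, and PAT are placeholders.
import json

import requests

ORG = "source-org"          # placeholder organization name
PROJECT = "source-project"  # placeholder project name
PAT = "xxxxxxxx"            # personal access token with Build (read) scope

BASE = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/definitions"
auth = ("", PAT)  # basic auth: empty username, PAT as the password

# List all definitions, then fetch each full definition and save it to disk.
definitions = requests.get(BASE, params={"api-version": "6.0"}, auth=auth).json()
for item in definitions["value"]:
    full = requests.get(
        f"{BASE}/{item['id']}", params={"api-version": "6.0"}, auth=auth
    ).json()
    with open(f"{item['id']}-{item['name']}.json", "w") as f:
        json.dump(full, f, indent=2)
```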

How to deliver an AWS interview assignment

I received an assignment from a company. For that assignment, I created a Kinesis Firehose delivery stream, S3 buckets, a Lambda function, a table and views in Athena, and a QuickSight dashboard using the AWS Console.
Then I developed a Python script for sending test data to the Kinesis Firehose delivery stream.
The company requested an easily reproducible environment.
I created a virtual environment and requirements.txt file for the Python code.
How can I create a reproducible environment for the AWS user, role, policies, Lambda function, and all other resources (stream, buckets, etc.)?
Thanks!
To create a reproducible AWS environment, you have to leverage a concept called Infrastructure as Code (IaC). Using native AWS, this can be done using AWS CloudFormation (declarative, using JSON or YAML) or the AWS Cloud Development Kit (CDK) (imperative, using TypeScript, JavaScript, Python, Java, or C#).
AWS CloudFormation provides a common language for you to model and provision AWS and third party application resources in your cloud environment.
The AWS Cloud Development Kit (AWS CDK) is an open source software development framework to model and provision your cloud application resources using familiar programming languages.
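For example, a minimal CDK sketch in Python (assuming CDK v2; the resource names and the local lambda/ code directory are placeholders, and a real stack would also need the Firehose stream, Athena, and QuickSight pieces):

```python
# app.py -- minimal AWS CDK (v2) sketch: an S3 bucket plus a Lambda function.
import aws_cdk as cdk
from aws_cdk import aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct


class AssignmentStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        bucket = s3.Bucket(self, "DataBucket")

        fn = _lambda.Function(
            self,
            "ProcessorFunction",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="handler.main",
            code=_lambda.Code.from_asset("lambda"),  # local directory with the code
        )
        bucket.grant_read_write(fn)  # the IAM policy is generated for you


app = cdk.App()
AssignmentStack(app, "AssignmentStack")
app.synth()
```

Running `cdk deploy` then creates all of the resources, and `cdk destroy` tears them down again, which is exactly the kind of reproducibility the company is asking for.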
As a side note: If you're interviewing for a non-entry-level job involving AWS (which I assume, since you're expected to build something using Kinesis Firehose, Lambda functions, Athena, etc.), please be aware that you should be very familiar with the basic cloud concepts, like IaC.

AWS Amplify & Serverless-Stack

I am currently looking into AWS Amplify and am also reading Serverless Stack. My goal is to create a simple to-do list app. Both "Getting started" guides seem to have the same goal. However, the AWS Amplify guide seems to be much easier in terms of setup.
And that's where I am confused. As far as I understand, AWS Amplify also uses DynamoDB and gets data via GraphQL. So where is the difference between these two documentations?
Serverless Stack is a resource providing guidance on how to create serverless applications with AWS. It was created by a company called Anomaly Innovations.
AWS Amplify is an open source framework maintained by AWS which helps developers integrate their applications with AWS resources.
AWS Amplify is a confusing service because it consists of many components. I would categorize them as follows:
1. AWS Amplify Console
2. AWS Amplify CLI
3. AWS SDK and libraries to integrate into your mobile or web app
4. AWS AppSync Transformer
The AWS Amplify Console gives you the ability to easily set up continuous deployment for your Amplify project. The Amplify Console is used together with the AWS Amplify CLI to manage different environments.
Let's say you want to start the Todo app. You start locally, using the Amplify CLI to create the API Gateway/Lambda/DynamoDB stacks.
The Amplify CLI lets you create the whole stack easily and push it to AWS to deploy it. You can then create different environments based on the same stacks, say a dev environment, a QA environment, and a production environment.
The Amplify CLI gives you all the commands necessary to achieve this; if you then want to auto-deploy changes to AWS when someone pushes code to your Git repository, you can use the Amplify Console to set up exactly that.
The Amplify Console also integrates with custom domains, so you can easily point your own domain at any of the environments.
On top of these, Amplify also provides the GraphQL Transformer, which lets you define a GraphQL schema in Amplify's format and have it transformed and deployed to AWS AppSync. There is also a mobile SDK that syncs data between AppSync and your mobile app and provides some UI components as well.
We used Amplify on one of our web projects and liked it for the continuous deployment aspect, but we didn't like the AppSync (GraphQL) aspect of Amplify because it was not easy to implement layered resolvers.
Also, keep in mind that the Amplify CLI/SDK/Transformer live under one project and are still very fragile. You can take a look at the version history at https://www.npmjs.com/package/@aws-amplify/cli and you will see several version bumps within a single month. There were many obvious bugs we encountered, even in the AWS Console.
I haven't used Serverless yet, but as far as I know, Serverless covers No. 1 and No. 2 above with greater stability.

How to migrate AWS Elasticsearch to Azure Elasticsearch?

Is there any procedure or documentation supported by Microsoft for migrating AWS Elasticsearch to Azure Elasticsearch? Does anyone know the process to do so?
I don't know of any information from Microsoft about it, but the AWS Elasticsearch Service supports manual snapshots, which you can store on S3 and then use to restore a cluster elsewhere.
The people at Alibaba Cloud have a step-by-step post on how to migrate from AWS to their cloud, so you can take that as a starting point.
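If you go the manual-snapshot route, registering an S3 snapshot repository on the AWS domain and starting a snapshot looks roughly like this (a sketch of the usual signed-request pattern; the domain endpoint, bucket, region, and IAM role ARN are placeholders, and it assumes the requests-aws4auth package is installed):

```python
# snapshot.py -- rough sketch: register an S3 snapshot repository on an AWS
# Elasticsearch domain and start a snapshot. The endpoint, bucket, region, and
# IAM role ARN are placeholders.
import boto3
import requests
from requests_aws4auth import AWS4Auth

region = "eu-west-1"
host = "https://my-domain.eu-west-1.es.amazonaws.com"  # placeholder endpoint

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(
    credentials.access_key,
    credentials.secret_key,
    region,
    "es",
    session_token=credentials.token,
)

# 1) Register the S3 bucket as a snapshot repository.
repo_body = {
    "type": "s3",
    "settings": {
        "bucket": "my-snapshot-bucket",  # placeholder bucket
        "region": region,
        "role_arn": "arn:aws:iam::123456789012:role/EsSnapshotRole",  # placeholder
    },
}
r = requests.put(f"{host}/_snapshot/migration-repo", auth=awsauth, json=repo_body)
r.raise_for_status()

# 2) Start a snapshot; the files written to S3 can later be restored elsewhere.
r = requests.put(f"{host}/_snapshot/migration-repo/snapshot-1", auth=awsauth)
r.raise_for_status()
```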