Is CloudFormation part of the architecture of my project? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
I use CDK for all of my infrastructure on AWS, but I have many problems with CloudFormation (for example, when a stack ends up in a ROLLBACK state I can't deploy; I have to destroy all related stacks and then deploy again). People at work say that I shouldn't consider CloudFormation part of my architecture and shouldn't discuss it as such. When I say architecture, I mean all the technologies I use to build my projects.
Should I consider CloudFormation part of my architecture, and should I discuss it as such?

It absolutely makes sense to discuss CloudFormation/IaC in the frame of your architecture.
It is definitely "part" of it.
I'd discuss it in the deployment view.

Related

Need basic AIOps implementation example for log monitoring in AWS [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 months ago.
AIOps seems like a very interesting topic. I also watched an AWS Summit presentation on this.
I have a logging and monitoring solution where all system and application logs from all EC2 and EKS instances are forwarded to a CloudWatch log group.
To get a jump start on AIOps, how can I use AI to predict/preempt incidents?
Just discovered that CloudWatch already provides a fair amount of AIOps built in.
"CloudWatch Anomaly Detection" - this seems to do the trick!
Very nice video on this available here - https://youtu.be/8umIX-pUy3k
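For reference, an anomaly-detection alarm can be declared directly in CloudFormation. This is a minimal sketch, not a complete template; the alarm name and instance ID are placeholders, and it watches a single EC2 instance's CPUUtilization against a 2-standard-deviation anomaly band:

```yaml
Resources:
  CpuAnomalyAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmName: cpu-anomaly-example
      # Fire when the metric rises above the upper edge of the band
      ComparisonOperator: GreaterThanUpperThreshold
      EvaluationPeriods: 3
      ThresholdMetricId: band
      Metrics:
        - Id: band
          # 2 = width of the band in standard deviations
          Expression: "ANOMALY_DETECTION_BAND(cpu, 2)"
        - Id: cpu
          MetricStat:
            Metric:
              Namespace: AWS/EC2
              MetricName: CPUUtilization
              Dimensions:
                - Name: InstanceId
                  Value: i-0123456789abcdef0  # placeholder
            Period: 300
            Stat: Average
```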

Replicate an AWS instance [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I have one EC2 p3.large instance where I have installed several libraries. I want to make an exact replica of this instance as a backup. I need this clone to include all the installed libraries; in that sense, something similar to what a Docker container does.
I have tried to clone the instance as shown here:
https://docs.bitnami.com/aws/faq/administration/clone-server/
But this did not keep the installed libraries and files from the original instance in the new one.
You can create an AMI of the current instance and use it as a backup at any time.
Related docs here
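With the AWS CLI this looks roughly like the following (the instance ID, AMI ID, and key name are placeholders; by default `create-image` reboots the instance to get a consistent snapshot, which `--no-reboot` would skip):

```shell
# Create an AMI from the running instance, capturing its volumes
# (and therefore all installed libraries and files)
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "p3-backup-$(date +%Y%m%d)" \
    --description "Backup including all installed libraries"

# Later, launch an exact replica from that AMI
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type p3.2xlarge \
    --key-name my-key
```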

Better user authentication? (AWS Cognito, OAuth2, or Okta) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
We have a requirement for a project, and we are planning to use the user management and authentication service of 'OAuth2'.
Our application will be on AWS, so we also wanted to look at AWS Cognito.
Could anyone help us decide which is the better option to go with?
I would proceed as follows:
Build apps in a standards based / portable manner, via certified open source libraries
Start with Cognito and see if it meets your requirements / identify its limitations. Avoid vendor-specific libraries unless there is a good reason.
If you need to switch vendors you will be able to do so quite easily, since your apps will not be locked into AWS
Out of interest I built all of the samples on my Quick Start Page using Cognito. It is a good place to start because it is stable and low cost.
As a rule of thumb, no vendor solution works perfectly - there will always be gaps between what you want and what they provide.
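To illustrate the "standards based / portable" point: if your app relies only on standard OIDC metadata, switching from Cognito to Okta mostly means pointing at a different discovery document. A minimal sketch; the endpoint URL, client ID, and redirect URI below are made-up placeholders, and in practice the metadata dict would come from the provider's /.well-known/openid-configuration document:

```python
from urllib.parse import urlencode

def build_authorize_url(issuer_metadata, client_id, redirect_uri, scope="openid"):
    # Uses only standard OIDC metadata fields, so swapping vendors
    # (Cognito, Okta, ...) means swapping the discovery document.
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",   # authorization code flow
        "scope": scope,
    }
    return issuer_metadata["authorization_endpoint"] + "?" + urlencode(params)

# Hypothetical metadata as a Cognito user pool would publish it:
cognito_metadata = {
    "authorization_endpoint": "https://auth.example.com/oauth2/authorize",
}
url = build_authorize_url(
    cognito_metadata, "my-client-id", "https://app.example.com/callback"
)
```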

Why use Redis with PostgreSQL, why not just one of them? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I have seen all over the web that people are configuring PostgreSQL alongside Redis. My question is: why would someone want to use an in-memory storage system like Redis when they have already configured a permanent storage system like PostgreSQL?
I get that Redis works from RAM and is much faster, but is that the only reason?
There might be a lot of reasons why people use that stack, but it is not necessary for all sites. It might be used, for example, for counting the most-visited pages, and Redis is also a good broker for async tasks, e.g. with Celery. But yes, in my opinion the main reason to use it is speed.
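The usual pattern behind that stack is cache-aside: read from Redis first, fall back to PostgreSQL on a miss, then populate the cache so later reads are served from memory. A minimal sketch; the cache argument is assumed to follow redis-py's get/set(..., ex=ttl) interface, and a tiny in-memory fake stands in for Redis here:

```python
import json

def get_user(user_id, cache, load_from_db, ttl_seconds=300):
    """Cache-aside read: try the cache first, fall back to the database,
    then populate the cache with a TTL for subsequent reads."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    user = load_from_db(user_id)               # e.g. a PostgreSQL SELECT
    cache.set(key, json.dumps(user), ex=ttl_seconds)
    return user

class FakeCache:
    """In-memory stand-in for redis.Redis() in this sketch (ignores TTL)."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value, ex=None):
        self._data[key] = value

db_calls = []
def load_from_db(user_id):                     # pretend PostgreSQL query
    db_calls.append(user_id)
    return {"id": user_id, "name": "Alice"}

cache = FakeCache()
first = get_user(1, cache, load_from_db)       # miss: hits the "database"
second = get_user(1, cache, load_from_db)      # hit: served from the cache
```

The database callable runs only once; the second read never touches PostgreSQL, which is exactly the point of adding Redis in front of it.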

Amazon S3 Usage reports by customer [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
For our video platform we store all of our video files in AWS S3 (and sometimes deliver them via CloudFront). Customers are divided into groups; for every group we created a bucket with a cost allocation tag.
So at this point we can monitor storage and streaming costs per group. But for a new project we are required to produce those reports per customer.
What would be the best approach? We could create a bucket for every customer, but I'm not a fan of that.
We could inspect the access logs, but according to the documentation they can be "wrong".
Any suggestions?
The documentation is only hedging against the occasional lost or delayed log file. They are not guaranteed to be perfect, but in practice, they are reliable. I get the sense that the purpose of the disclaimer is to avoid petty disputes, rather than significant discrepancies.
Consider using the logs to do your own reporting on your existing projects, where you already know the costs... and compare those results to the results you get with the tag-based billing setup. If the answers are consistent, the problem seems effectively solved.
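If you do go the access-log route, per-customer aggregation is straightforward provided the object keys encode the customer (a `<customer>/<file>` key layout is an assumption for this sketch). The parser below splits S3 server access log lines while keeping the bracketed timestamp and quoted request URI intact, then sums the "bytes sent" field per key prefix:

```python
import re
from collections import defaultdict

# A field is either [...], "...", or a bare token.
_FIELD = re.compile(r'\[[^\]]*\]|"[^"]*"|\S+')

def parse_line(line):
    """Split one S3 server access log line into fields, keeping the
    bracketed timestamp and quoted request URI as single fields."""
    return _FIELD.findall(line)

def bytes_by_customer(lines):
    """Sum the 'bytes sent' field (index 11) per top-level key prefix,
    assuming keys look like <customer>/<rest-of-key>."""
    totals = defaultdict(int)
    for line in lines:
        fields = parse_line(line)
        if len(fields) < 13:
            continue  # truncated or unexpected line
        key, bytes_sent = fields[7], fields[11]
        if key == "-" or bytes_sent == "-":
            continue  # bucket-level operation, or no payload served
        totals[key.split("/", 1)[0]] += int(bytes_sent)
    return dict(totals)

# A shortened sample line in the S3 server access log field order
sample = ('owner bucket [06/Feb/2019:00:00:38 +0000] 192.0.2.3 requester '
          'REQID REST.GET.OBJECT acme/video.mp4 '
          '"GET /bucket/acme/video.mp4 HTTP/1.1" 200 - 1024 2048')
```

From there, joining the per-prefix byte totals against your price per GB gives the per-customer report, which you can then sanity-check against the tag-based numbers as suggested above.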