I'm having some issues with a Grafana deployment. After deploying Grafana, I can't change the default password for the admin account, which you have to do the first time you launch Grafana. I log in with the default credentials and get prompted to enter a new password. When I do, I get an "unauthorized" error, and the browser's console shows a 404 error when the new password is submitted.
I'm using an RDS instance to store Grafana user data. The RDS instance is in the same subnet as the ECS cluster. I've attached the AmazonRDSDataFullAccess policy to the ECS task role, but that did not help. I also tried making the RDS instance publicly accessible, but that did not help either.
I'm using Grafana version 6.5.0. I was on the latest 7.1 but downgraded, hoping it would solve my current issue.
First, make sure your RDS instance has a security group allowing inbound access from the ECS cluster on the database port; this grants the tasks the inbound access to the RDS database that they require.
As Fargate is serverless, a task can be replaced at any time, and any local configuration is gone with it. Since you're using RDS, you should use environment variables to specify the DB connection details.
Finally, add these to your task definition using the environment item. For secrets, such as the password for the RDS database, use the secrets option instead.
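As a sketch, the relevant container-definition fragment might look like this. The endpoint, database names and secret ARN are placeholders; the GF_DATABASE_* names are Grafana's standard environment-variable overrides for its database section:

```json
"environment": [
  { "name": "GF_DATABASE_TYPE", "value": "mysql" },
  { "name": "GF_DATABASE_HOST", "value": "my-grafana-db.abc123.us-east-1.rds.amazonaws.com:3306" },
  { "name": "GF_DATABASE_NAME", "value": "grafana" },
  { "name": "GF_DATABASE_USER", "value": "grafana" }
],
"secrets": [
  { "name": "GF_DATABASE_PASSWORD", "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:grafana-db-password" }
]
```

With the database settings supplied this way, every new Fargate task reads the same state from RDS, so the admin password change survives task replacement.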
I have created an EKS cluster with the Managed Node Groups.
Recently, I have deployed Redis as an external Load Balancer service.
I am trying to set up an authenticated connection to it from Node.js and Python microservices, but I am getting a connection timeout error.
However, I am able to enter into the deployed redis container and execute the redis commands.
Also, I was able to do the same when I deployed Redis on GKE.
Have I missed some network configurations to allow traffic from external resources?
The subnets which the EKS node is using are all public.
Also, while creating the Amazon EKS node role, I have attached 3 policies to this role as suggested in the doc -
AmazonEKSWorkerNodePolicy
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy
It was also mentioned that -
We recommend assigning the policy to the role associated to the Kubernetes service account instead of assigning it to this role.
Will attaching this policy to the Kubernetes service account solve my problem?
Also, here is the guide that I used for deploying redis -
https://ot-container-kit.github.io/redis-operator/guide/setup.html#redis-standalone
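A connection timeout from outside the cluster usually points at security groups rather than authentication: a LoadBalancer service on EKS only works if the node/ELB security groups allow inbound traffic on the Redis port. One way to check from a workstation (the service name, hostname and password below are placeholders):

```shell
# External hostname assigned to the LoadBalancer service (service name assumed)
kubectl get svc redis-standalone \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Is the Redis port reachable at all from outside the cluster?
nc -zv my-elb-hostname.elb.amazonaws.com 6379

# If the port is open, an authenticated PING should answer PONG
redis-cli -h my-elb-hostname.elb.amazonaws.com -p 6379 -a 'my-password' ping
```

If nc times out while in-cluster access works, the fix is a security-group rule, not an IAM policy: the node-role policies listed above govern AWS API access, not inbound traffic to pods.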
I was tasked to spin up Windows 2019 servers (as per AWS documentation, this has SSM agent preinstalled) and disable port 3389 for RDP because the only access they want is via Amazon Systems Manager Session Manager.
I have attached the AmazonSSMManagedInstanceCore policy to the instance role, which gives Session Manager permission to access this server programmatically, but I still have issues accessing this server via Session Manager. Possible errors are:
The agent is not installed,
The required IAM role is not attached etc.
But I have done all this and am still unable to access this server.
So I want to be able to edit the UserData with a bootstrapping script that installs SSM agent and see if that fixes the issue.
My guess is maybe someone tampered with the server and deleted the SSM agent file.
This doesn't answer the question about a bootstrap script, as I am still researching on that
But I solved the issue I had with AWS Systems Manager Session Manager.
The SSM Agent was still installed in the servers.
Upon creating my VPC, I had created a private subnet and VPC endpoints that Session Manager would use to talk to resources in that subnet. I later deleted the private subnet, since my company wanted all servers in a public subnet.
Because those VPC endpoints were still in place, Session Manager wasn't able to locate the servers I was trying to connect to.
SOLUTION: After deleting the VPC endpoints, Session Manager now connects to all those servers with ease... Yay!!!
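For anyone cleaning up the same way, the stale endpoints can also be found and removed with the AWS CLI (the VPC and endpoint IDs below are placeholders):

```shell
# List the interface endpoints left in the VPC
aws ec2 describe-vpc-endpoints \
  --filters Name=vpc-id,Values=vpc-0abc1234 \
  --query 'VpcEndpoints[].[VpcEndpointId,ServiceName]' \
  --output table

# Delete the ones that were created for the removed private subnet
aws ec2 delete-vpc-endpoints --vpc-endpoint-ids vpce-0abc1234 vpce-0def5678
```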
FYI: I still would love a bootstrap script that will install SSM Agent to Amazon EC2 Windows Servers upon launch.
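For the record, a minimal UserData bootstrap for Windows might look like the following. This is a sketch: the download URL is the one AWS documents for manually installing the agent, but verify it against the current SSM Agent documentation before relying on it.

```powershell
<powershell>
# Download the SSM Agent installer (URL per AWS docs; verify before use)
$url = "https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/windows_amd64/AmazonSSMAgentSetup.exe"
$installer = "$env:TEMP\AmazonSSMAgentSetup.exe"
Invoke-WebRequest -Uri $url -OutFile $installer

# Silent install, then make sure the service is running
Start-Process -FilePath $installer -ArgumentList "/S" -Wait
Restart-Service AmazonSSMAgent
</powershell>
```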
I used Elastic Beanstalk to upload an application while studying; it was part of a group project. However, the account was suspended when the billing details turned out to be incorrect, and this took the application's services down.
After resolving the account with Amazon, the Elastic Beanstalk environment was up and running apart from the RDS instance. I then restored the most recent RDS backup, but I can no longer access the MySQL database with the previous details (host, DB name and password), and the application no longer works because the details to connect to the DB are wrong.
I then found out I cannot use a snapshot RDS instance with an existing EB environment, so I am doing the following steps:
Restore the database to a new RDS instance.
Make a manual backup of this new RDS instance. Create a new Beanstalk environment using your manual RDS backup.
Test to make sure everything is working as expected.
Update URLs or DNS to make sure traffic is routed to your new environment.
However, I do not know how to do step two. Can anybody help me with creating a new EB environment using an RDS instance snapshot?
(So I can access the DB)
These are the steps involved in creating an AWS Elastic Beanstalk environment. During environment creation, select the "create RDS" check box.
When you get to the RDS configuration step, select the snapshot of your database in the drop-down and then proceed to the end.
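If you prefer the CLI, the console flow above corresponds to setting the DBSnapshotIdentifier option in the aws:rds:dbinstance configuration namespace when creating the environment. A sketch (the application, environment, platform and snapshot names are placeholders):

```shell
aws elasticbeanstalk create-environment \
  --application-name my-app \
  --environment-name my-env-restored \
  --solution-stack-name "64bit Amazon Linux 2 v3.5.0 running PHP 8.1" \
  --option-settings \
    Namespace=aws:rds:dbinstance,OptionName=DBSnapshotIdentifier,Value=my-manual-snapshot
```

The actual platform string for your application can be listed with `aws elasticbeanstalk list-available-solution-stacks`.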
When you set up a new Elastic Beanstalk cluster you can access your EC2 instance by doing this:
eb ssh
However, it's not clear how to access the RDS instance.
How do you access an RDS in an Elastic Beanstalk context in order to perform CRUD operations?
The RDS instance can be reached from the command line anywhere, by adjusting the RDS security group.
Check your AWS VPC configuration. The security group will need to be adjusted to allow you to connect from a new source/port:
Find the security group ID for the RDS instance.
Find that group in AWS Console > VPC > Security Groups.
Adjust the inbound and outbound rules accordingly: you need to allow access to/from the IP or security group that needs to connect to the RDS instance.
FROM: https://stackoverflow.com/a/37200075/1589379
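The console steps above can also be done with a single CLI call; for example, to open MySQL's port to your own IP (the group ID and CIDR are placeholders):

```shell
# Allow inbound MySQL (3306) from a single IP on the RDS security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0abc1234 \
  --protocol tcp \
  --port 3306 \
  --cidr 203.0.113.5/32
```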
After that, all that remains is configuring whatever local DB tool you would like to use to operate on the database.
EDIT:
Of additional note, if the ElasticBeanstalk Environment is configured to use RDS, the EC2 Instances will have environment variables set with the information needed to connect to the RDS.
This means that you can import those variables into any code that needs access.
Custom environment variables may also be set in Elastic Beanstalk Environment Configuration, and these too may be included this way.
PHP
define('RDS_HOSTNAME', getenv('RDS_HOSTNAME'));
$db = new mysqli(RDS_HOSTNAME, getenv('RDS_USERNAME'), getenv('RDS_PASSWORD'), getenv('RDS_DB_NAME'));
Linux CommandLine
mysql --host=$RDS_HOSTNAME --port=$RDS_PORT -u $RDS_USERNAME -p$RDS_PASSWORD
RDS is a managed database service, which means you can only access it through database connections; you cannot SSH into the instance itself.
If it is a MySQL database, you can access it from your EC2 instance through the mysql client like this (note that -p with a trailing space would be read as a bare prompt flag, with the next word taken as the database name, so either write -pPASSWORD with no space or let it prompt):
mysql -u user -p -h rds.instance.endpoint.region.rds.amazonaws.com
or configure your application with the same connection settings.
Make sure that you set up security groups correctly so that your EC2/other service has access to your RDS instance.
Update:
If you want direct access of that kind, you should use an EC2 instance with a MySQL server on it instead. It would cost about the same (even though a fraction of the performance is lost in comparison), and you can stop an EC2 instance when you are not using it.
Do Heroku apps run on default VPC or do they run on custom VPC? (I assume by now everyone is using VPC and not the older EC2-Classic)
Does anyone have information about the VPC ID of Heroku (if they are using a custom VPC)?
Earlier they used the AWS account number: 098166147350
as per AWS Forum.
Do they still use the same account?
However, Heroku recommends against using the account ID to control access, per the Heroku Dev Center.
My idea is to restrict access to my service only to the VPC heroku is using. Also, I want to add a VPC peering connection from my VPC.
On top of this, I will add other security features to further restrict access only to the relevant apps.
Heroku's Cedar stack currently still runs on EC2-Classic.
The beta Private Spaces feature allows you to create a VPC and host your apps inside it.