Hello, my friends, I need your help.
I'm learning to use the AWS console, so I'm on the 12-month free tier.
However, I was charged for the platform's MySQL database, so I deleted the database instance, the automatic backups, and the snapshots. But when I tried to delete the parameter group, the error below occurred:
Failed to delete default.mysql8.0: Default DBParameterGroup cannot be deleted: default.mysql8.0 (Service: AmazonRDS; Status Code: 400; Error Code: InvalidDBParameterGroupState; Request ID: cec752fc-6e77-4a42-b38b-c26b079a6e21; Proxy: null).
This leaves me with the following questions:
How can I delete this parameter group?
Since I deleted the database instance, automatic backups, and snapshots, will I continue to be charged?
Thanks
If you create a DB instance without specifying a DB parameter group, the DB instance uses the default parameter group for its DB engine.
You cannot delete a default parameter group. This is stated in the AWS documentation for the DeleteDBParameterGroup action.
Parameter groups cost nothing, so you will not be charged for it.
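If you're cleaning up with a script, you can guard against this error by skipping the default groups, since every group named default.&lt;engine&gt; is owned by RDS. A minimal boto3-flavoured sketch (the live calls are shown as comments and assume configured credentials):

```python
def deletable_parameter_groups(group_names):
    """Return only the custom DB parameter group names.

    RDS owns every group named default.<engine-version>; trying to delete
    one raises InvalidDBParameterGroupState, so filter them out first.
    """
    return [name for name in group_names if not name.startswith("default.")]

# With boto3 (requires credentials), the cleanup would look like:
#   rds = boto3.client("rds")
#   names = [g["DBParameterGroupName"]
#            for g in rds.describe_db_parameter_groups()["DBParameterGroups"]]
#   for name in deletable_parameter_groups(names):
#       rds.delete_db_parameter_group(DBParameterGroupName=name)
```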
I'm trying to create an Auto Scaling group that manages EKS worker node provisioning. According to the AWS docs, under the "Nodes fail to join cluster" section, in order for instances to join a cluster, the new instances must carry the tag kubernetes.io/cluster/my-cluster, where my-cluster is the name of the cluster, and the value of the tag must be owned. However, when the Auto Scaling group tries to provision new instances, I see the following error in the activity section:
Launching a new EC2 instance. Status Reason: Could not launch Spot
Instances. InvalidParameterValue -
'kubernetes.io/cluster/my-cluster' is not a valid tag
key. Tag keys must match pattern ([0-9a-zA-Z\-_+=,.#:]{1,255}), and
must not be a reserved name ('.', '.', '_index'). Launching EC2
instance failed.
Does anyone know why this is happening and how I can address this?
I worked with AWS Support and discovered the issue comes from a new feature called instance tags in the EC2 instance metadata service.
This feature offers an alternative to making API calls via the AWS CLI by letting developers query instance tags through the instance metadata service. This is useful for reducing the number of API calls if you are hitting AWS request limits.
However, it conflicts with the Auto Scaling group here because the required tag key contains a character (the '/' in kubernetes.io/cluster/my-cluster) that is not supported in instance-metadata tag keys.
The solution to the problem is to set 'Metadata accessible' to 'Don't include in launch template' or 'Disabled' when creating your launch template.
You can find this option when creating or modifying a launch template under: Advanced details > Metadata accessible
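If you build launch templates programmatically, the same setting lives in the MetadataOptions block (InstanceMetadataTags). A hedged boto3 sketch, with the AMI ID, instance type, and template name as placeholders and the live call commented out:

```python
def launch_template_data(ami_id, instance_type, cluster_name):
    """Build LaunchTemplateData that keeps instance-metadata tags disabled,
    so tag keys containing '/' (like kubernetes.io/cluster/...) stay valid."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        # Disabling metadata tags avoids the tag-key pattern conflict.
        "MetadataOptions": {"InstanceMetadataTags": "disabled"},
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": f"kubernetes.io/cluster/{cluster_name}",
                      "Value": "owned"}],
        }],
    }

# With boto3 (credentials required):
#   ec2 = boto3.client("ec2")
#   ec2.create_launch_template(
#       LaunchTemplateName="eks-workers",
#       LaunchTemplateData=launch_template_data(
#           "ami-0123456789abcdef0", "t3.medium", "my-cluster"))
```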
I am working on a process to accomplish the following goals:
1. Copy a DB snapshot from one region to another.
2. After copying the snapshot to the new region, restore it.
3. Put the newly restored DB instance into a VPC other than "default", so that my AWS WorkSpaces clients can reach it.
I've got steps 1 and 2 working perfectly; however, the API doesn't seem to give you many options when restoring a DB instance from a snapshot. Here's the API process according to AWS's documentation: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/RDS/Client.html#restore_db_instance_from_db_snapshot-instance_method
Looking at those options, the only one remotely related to the VPC is vpc_security_group_ids; however, when I tried specifying a security group that belongs to the VPC I want the RDS instance restored into, I got the following error in the console:
Aws::RDS::Errors::InvalidParameterCombination: The DB instance and EC2 security group are in different VPCs. The DB instance is in vpc-76cf380e and the EC2 security group is in vpc-012999f6551c713c6
from /Users/nutella/.gem/ruby/2.6.0/gems/aws-sdk-core-3.113.1/lib/seahorse/client/plugins/raise_response_errors.rb:17:in `call'
It's clearly possible to do this from the RDS console when restoring a snapshot, since you can simply select the VPC that you want to restore the DB instance into, but it's not clear how to do this with the SDK.
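For what it's worth, restore_db_instance_from_db_snapshot does accept a db_subnet_group_name parameter, and because a DB subnet group belongs to exactly one VPC, that parameter is what decides which VPC the restored instance lands in; the security groups must then belong to the same VPC. A boto3-flavoured sketch (all identifiers are hypothetical, and the live call is commented out):

```python
def restore_params(instance_id, snapshot_id, subnet_group, sg_ids):
    """Parameters for restore_db_instance_from_db_snapshot.

    DBSubnetGroupName selects the target VPC (a subnet group belongs to
    one VPC); VpcSecurityGroupIds must belong to that same VPC, otherwise
    you get InvalidParameterCombination.
    """
    return {
        "DBInstanceIdentifier": instance_id,
        "DBSnapshotIdentifier": snapshot_id,
        "DBSubnetGroupName": subnet_group,
        "VpcSecurityGroupIds": sg_ids,
    }

# With boto3 (credentials required):
#   rds = boto3.client("rds")
#   rds.restore_db_instance_from_db_snapshot(
#       **restore_params("restored-db", "copied-snapshot",
#                        "workspaces-subnet-group", ["sg-0aaaabbbbccccdddd"]))
```

The Ruby SDK exposes the same option as db_subnet_group_name on restore_db_instance_from_db_snapshot.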
I have 2 EC2 instances spawned via Service Catalog product provisioning. For various reasons, I terminated them both and now want to spawn new EC2 instances (not the terminated ones).
So I tried updating the product again from Service Catalog, hoping it would recreate them because the earlier instances no longer exist.
Product provisioning succeeds, and yet the EC2 instances are not created.
My product is actually a full stack comprising several sub-stacks, one of which creates the EC2 instances.
We could envision it as below:
Full Stack
  - Sub-Stack-1
  - Sub-Stack-2
  - Sub-Stack-3
The question is how to get new EC2 instances created without having to terminate the full stack.
More info on permissions:
I use 2 roles. One role is used only to provision products from Service Catalog; the other is an admin-like role that I can use to terminate the EC2 instances. I just don't want to spawn the EC2 instances from the admin role; I want to provision them through the product.
AWS CloudFormation is not "aware" of resource changes made outside of its control. So it currently thinks that the EC2 instances still exist, even though they have been terminated.
If you have sufficient permissions to use CloudFormation, you could:
1. Download the CloudFormation template that was deployed by Service Catalog.
2. Remove the section that defines the EC2 instances.
3. Update the stack with the edited template -- this will cause CloudFormation to "terminate" the instances (which are already terminated).
4. Edit the template to add the instance definitions back, then update the stack again with this template (effectively the same template that was originally used to launch the stack) -- this should cause new instances to be deployed that match the original specification.
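The steps above can be sketched with boto3 (stack name and logical resource IDs are hypothetical, and the live calls are commented out since they need credentials):

```python
import json

def drop_resources(template, logical_ids):
    """Return a copy of a CloudFormation template (parsed as a dict) with
    the given logical resource IDs removed from its Resources section."""
    out = json.loads(json.dumps(template))  # cheap deep copy
    for logical_id in logical_ids:
        out["Resources"].pop(logical_id, None)
    return out

# With boto3 (get_template returns a dict for JSON templates; YAML
# templates come back as a string and need parsing first):
#   cfn = boto3.client("cloudformation")
#   body = cfn.get_template(StackName="my-stack")["TemplateBody"]
#   trimmed = drop_resources(body, ["WebServerInstance"])
#   cfn.update_stack(StackName="my-stack",
#                    TemplateBody=json.dumps(trimmed),
#                    Capabilities=["CAPABILITY_NAMED_IAM"])
#   # Wait for UPDATE_COMPLETE, then update again with the original body
#   # to have CloudFormation create fresh instances.
```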
I have one live AWS SageMaker endpoint with auto scaling enabled.
Now I want to update it from 'ml.t2.xlarge' to 'ml.t2.2xlarge', but it is showing this error:
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the
UpdateEndpoint operation: The variant(s) "[config1]" must be deregistered as scalable targets with
Application Auto Scaling before they can be removed or have their instance type updated.
I believe we need to first deregister auto scaling, as described here:
https://docs.aws.amazon.com/sagemaker/latest/dg/endpoint-auto-scaling-delete.html
but I'm worried this will take our application down, and retraining the model would take multiple hours. We can't afford that, so please let me know if there is a better way to do it.
You should have no problem updating your endpoint's instance type without taking an availability hit. The basic method looks like this when you have an active autoscaling policy:
1. Create a new EndpointConfig that uses the new instance type, ml.t2.2xlarge.
   - Do this by calling CreateEndpointConfig.
   - Pass in the same values you used for your previous endpoint config. You can point to the same ModelName as well; by reusing the same model, you don't have to retrain anything.
2. Delete the existing autoscaling policy.
   - Depending on your autoscaling, you might want to increase the desired instance count of your endpoint in case it needs to scale while you are doing this.
   - If you experience a spike in traffic while making these API calls, you risk an outage if the model can't keep up. Keep this in mind and consider scaling out in advance.
3. Call UpdateEndpoint as you did previously and specify the new EndpointConfigName.
4. Wait for your endpoint status to be InService. This should take 10-20 minutes.
5. Create a new autoscaling policy for the new endpoint config and production variant.
You should be good to go without sacrificing availability.
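The sequence above can be sketched with boto3 (endpoint, config, and model names are placeholders; the live calls are commented out since they need credentials):

```python
def scalable_target_id(endpoint_name, variant_name):
    """Application Auto Scaling resource ID for an endpoint variant."""
    return f"endpoint/{endpoint_name}/variant/{variant_name}"

# With boto3:
#   sm = boto3.client("sagemaker")
#   aas = boto3.client("application-autoscaling")
#
#   # 1. New endpoint config: same model, bigger instance type.
#   sm.create_endpoint_config(
#       EndpointConfigName="my-config-2xlarge",
#       ProductionVariants=[{"VariantName": "config1",
#                            "ModelName": "my-model",
#                            "InstanceType": "ml.t2.2xlarge",
#                            "InitialInstanceCount": 2}])
#
#   # 2. Deregister the scalable target (the endpoint keeps serving).
#   aas.deregister_scalable_target(
#       ServiceNamespace="sagemaker",
#       ResourceId=scalable_target_id("my-endpoint", "config1"),
#       ScalableDimension="sagemaker:variant:DesiredInstanceCount")
#
#   # 3. Blue/green update; old instances serve until new ones are ready.
#   sm.update_endpoint(EndpointName="my-endpoint",
#                      EndpointConfigName="my-config-2xlarge")
#
#   # 4. Once InService again, re-register the scalable target and
#   #    re-create the scaling policy with put_scaling_policy.
```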
I am new to AWS. After an invalid deployment, my environment cloudapp went into the Grey state. I created another environment, cloudapp-1, and successfully uploaded and deployed my app. Then I swapped the URLs to keep the first address working.
Now that my first environment is in the Grey state, I am not able to do anything with it. I am not able to deploy, rebuild, or even terminate it. I receive errors like the ones below.
Stack deletion failed: The following resource(s) failed to delete: [awseb-xxx-AWSEBSecurityGroup].
2016-07-13 13:23:32 UTC+0200 ERROR Deleting security group named: awseb-xxx-AWSEBSecurityGroup failed Reason: resource sg-xxxxxxx has a dependent object
I have tried to remove the AWSEBSecurityGroup from cloudapp, but I cannot because:
Error
Unable to validate settings: Environment named cloudapp is in an invalid state for this operation. Must be Ready.
It looks like a kind of deadlock: I cannot delete the environment because of the security group, and I cannot change that group because the environment is not Ready.
How can I fix it?
First make sure that no instances other than the Elastic Beanstalk EC2 instances belonging to this particular environment are using the sg-xxxxxx security group.
Then make sure that no other objects depend on that security group, as the error message vaguely states. Go to EC2 > Security Groups and search by Source/Destination (Group ID) for the sg-xxxxxx group.
This will give you a list of all security groups with rules referencing sg-xxxxxx. Once you've removed the depending rules, you can retry your Elastic Beanstalk operation.
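If there are many groups to check, a small script can do the search. A boto3-flavoured sketch (the helper works on the rule structures returned by describe_security_groups; the live calls are commented out since they need credentials):

```python
def references_group(security_group, target_group_id):
    """True if any ingress or egress rule of this security group (in the
    shape returned by describe_security_groups) references target_group_id."""
    rules = (security_group.get("IpPermissions", []) +
             security_group.get("IpPermissionsEgress", []))
    return any(pair.get("GroupId") == target_group_id
               for rule in rules
               for pair in rule.get("UserIdGroupPairs", []))

# With boto3 (the group ID is a placeholder):
#   ec2 = boto3.client("ec2")
#   groups = ec2.describe_security_groups()["SecurityGroups"]
#   dependents = [g["GroupId"] for g in groups
#                 if references_group(g, "sg-0123456789abcdef0")]
#   # Remove the referencing rules in each dependent group, then retry
#   # terminating the Elastic Beanstalk environment.
```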