AWS Command Line Interface - Start a DB instance - amazon-web-services

As far as I know, Amazon RDS supports stopping and starting database instances.
I am running the command from macOS Sierra.
I want to start a DB instance using the AWS Command Line Interface (following this reference: http://docs.aws.amazon.com/cli/latest/reference/rds/start-db-instance.html),
but somehow I get an error:
MacBook-Pro-de-lopes:~ lopes$ aws rds start-db-instance lopesdbtest
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument operation: Invalid choice, valid choices are:
add-source-identifier-to-subscription | add-tags-to-resource
apply-pending-maintenance-action | authorize-db-security-group-ingress
copy-db-cluster-snapshot | copy-db-parameter-group
copy-db-snapshot | copy-option-group
create-db-cluster | create-db-cluster-parameter-group
create-db-cluster-snapshot | create-db-instance
create-db-instance-read-replica | create-db-parameter-group
create-db-security-group | create-db-snapshot
create-db-subnet-group | create-event-subscription
create-option-group | delete-db-cluster
delete-db-cluster-parameter-group | delete-db-cluster-snapshot
delete-db-instance | delete-db-parameter-group
delete-db-security-group | delete-db-snapshot
delete-db-subnet-group | delete-event-subscription
delete-option-group | describe-account-attributes
describe-certificates | describe-db-cluster-parameter-groups
describe-db-cluster-parameters | describe-db-cluster-snapshots
describe-db-clusters | describe-db-engine-versions
describe-db-instances | describe-db-log-files
describe-db-parameter-groups | describe-db-parameters
describe-db-security-groups | describe-db-snapshot-attributes
describe-db-snapshots | describe-db-subnet-groups
describe-engine-default-cluster-parameters | describe-engine-default-parameters
describe-event-categories | describe-event-subscriptions
describe-events | describe-option-group-options
describe-option-groups | describe-orderable-db-instance-options
describe-pending-maintenance-actions | describe-reserved-db-instances
describe-reserved-db-instances-offerings | download-db-log-file-portion
failover-db-cluster | list-tags-for-resource
modify-db-cluster | modify-db-cluster-parameter-group
modify-db-instance | modify-db-parameter-group
modify-db-snapshot-attribute | modify-db-subnet-group
modify-event-subscription | promote-read-replica
purchase-reserved-db-instances-offering | reboot-db-instance
remove-source-identifier-from-subscription | remove-tags-from-resource
reset-db-cluster-parameter-group | reset-db-parameter-group
restore-db-cluster-from-snapshot | restore-db-cluster-to-point-in-time
restore-db-instance-from-db-snapshot | restore-db-instance-to-point-in-time
revoke-db-security-group-ingress | add-option-to-option-group
remove-option-from-option-group | wait
help
Invalid choice: 'start-db-instance', maybe you meant:
* reboot-db-instance
* create-db-instance

You need to update to the latest version of the AWS CLI tool. The version you currently have installed was released before the RDS start/stop feature was available.

It is a new feature (announced on June 1, 2017), so you have to upgrade your AWS CLI. See the announcement: Amazon RDS Supports Stopping and Starting of Database Instances.
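Once upgraded, note that the instance identifier is passed as a named parameter rather than a positional argument. A minimal sketch, assuming a pip-based install (your install method may differ):
# Upgrade the CLI; any release from June 2017 onward includes the RDS start/stop commands
pip install --upgrade awscli
aws --version
# The identifier must be passed with --db-instance-identifier
aws rds start-db-instance --db-instance-identifier lopesdbtest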

How to get the main container only in the ECS describe-tasks query?

I need to get a list of running ECS tasks with their image names/tags.
I'm trying it in two steps:
1. Extract the task ARNs:
ARNS=$(aws ecs list-tasks --cluster $CLUSTER_NAME \
--desired-status 'RUNNING' --query 'taskArns' \
--output json --profile $PROFILE)
2. Describe the tasks:
aws ecs describe-tasks --cluster $CLUSTER_NAME --tasks $ARNS \
--profile $PROFILE --output table \
--query "tasks[] | [].[startedAt,containers[0].image]"
The problem is I have multiple sidecar containers in each task, and their order is inconsistent, so containers[0] returns a random container every time.
Example output
-----------------------------------------------------------------------------------------------
| DescribeTasks |
+-----------------------------------+---------------------------------------------------------+
| 2022-08-15T21:01:22.513000-07:00 | lacework/datacollector:latest-sidecar |
| 2022-08-15T21:01:21.511000-07:00 | lacework/datacollector:latest-sidecar |
| 2022-08-15T21:01:22.102000-07:00 | lacework/datacollector:latest-sidecar |
| 2022-08-15T21:01:21.743000-07:00 | 999999999999.dkr.ecr.us-east-1.amazonaws.com/bar:prod |
| 2022-08-15T21:02:02.298000-07:00 | 999999999999.dkr.ecr.us-east-1.amazonaws.com/bar:prod |
| 2022-08-15T21:02:31.743000-07:00 | 999999999999.dkr.ecr.us-east-1.amazonaws.com/bar:prod |
+-----------------------------------+---------------------------------------------------------+
Can I filter the list to keep the primary containers only, or at least sort containers in some consistent way?
A possible solution is to filter the container list by images whose names start with your ECR account ID.
I made it work like this:
aws ecs describe-tasks \
--cluster yourClusterName \
--output table \
--query 'tasks[] | [].[startedAt,containers[?starts_with(image, to_string(`999999999999`))].image]' \
--tasks `aws ecs list-tasks --desired-status RUNNING --query taskArns --cluster yourClusterName --output text`
Which produces an output like this one:
------------------------------------------------------------------------------
| DescribeTasks |
+----------------------------------------------------------------------------+
| 2022-10-24T17:29:16.003000+02:00 |
| 999999999999.dkr.ecr.us-east-2.amazonaws.com/business:v0.9.1 |
| 2022-10-19T17:53:46.015000+02:00 |
| 999999999999.dkr.ecr.us-east-2.amazonaws.com/datacore:v0.5.1 |
| 2022-10-24T17:30:05.670000+02:00 |
| 999999999999.dkr.ecr.us-east-2.amazonaws.com/application:v0.16.2 |
| 2022-10-24T18:53:31.795000+02:00 |
| 999999999999.dkr.ecr.us-east-2.amazonaws.com/frontend:development-v1.9.7 |
+----------------------------------------------------------------------------+
I wasn't able to fix the format of the output. JMESPath is not really my thing.
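If you'd rather do the flattening outside of JMESPath, here is a jq-based sketch (my assumption, not from the original answer; it presumes jq is installed and that 999999999999 is your ECR account ID). It prints one startedAt/image pair per task, tab-separated:
# Describe running tasks, then filter container images by ECR account ID with jq
aws ecs describe-tasks \
--cluster yourClusterName \
--output json \
--tasks $(aws ecs list-tasks --desired-status RUNNING --query taskArns --cluster yourClusterName --output text) \
| jq -r '.tasks[] | [.startedAt, (.containers[] | select(.image | startswith("999999999999")) | .image)] | @tsv'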

aws describe-instances using the jq CLI

#!/bin/bash
SECURITY_GROUP_ID="$(aws ec2 describe-security-groups | jq -r ' .SecurityGroups[] | select(.IpPermissions[] | .FromPort == 22 and .IpRanges[].CidrIp == "0.0.0.0/0") | .GroupId')"
aws ec2 describe-instances \
--filters "Name=network-interface.group-id,Values=${SECURITY_GROUP_ID}" \
| jq -r ".Reservations | .[] | .Instances | .[] | .InstanceId"
This gives empty output, with the job showing as succeeded. The expected output is a list of all EC2 instances whose security groups meet the IpPermissions criteria. Can anyone correct this script?
Thanks
#!/bin/bash
# Find security groups that allow port 22 from 0.0.0.0/0
SECURITY_GROUP_ID="$(aws ec2 describe-security-groups | jq -r ' .SecurityGroups[] | select(.IpPermissions[] | .FromPort == 22 and .IpRanges[].CidrIp == "0.0.0.0/0") | .GroupId')"
# The filter expects a comma-separated list, so quote the variable to keep
# its newlines and let tr turn them into commas
aws ec2 describe-instances \
--filters "Name=instance.group-id,Values=$(echo -n "$SECURITY_GROUP_ID" | tr '\n' ',')" \
| jq -r ".Reservations | .[] | .Instances | .[] | .InstanceId"
I tried using a 'for' loop in the script, describing the instances with the filter and query options inside the loop. It takes the output from describe-security-groups and uses it as a variable in the loop to query instances, as given below:
aws ec2 describe-instances --filters Name=network-interface.group-id,Values=$sg --query …………
This worked well for listing instances with particular security group IDs. A sketch of the full loop follows.
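A minimal sketch of that loop, assuming the SECURITY_GROUP_ID variable from the script above (the original --query expression was elided, so the one here is illustrative):
# Iterate over the matched security group IDs and list instances for each
for sg in $SECURITY_GROUP_ID; do
aws ec2 describe-instances \
--filters "Name=network-interface.group-id,Values=$sg" \
--query "Reservations[].Instances[].InstanceId" \
--output text
done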

Launch an openstack instance with two NIC

I want to launch an instance with two vNICs: one for a private network and the other for a public network. How can I do that in OpenStack?
You need to boot the image with the --nic net-id= argument passed twice, as in the example below:
nova boot --image cirros-0.3.3-x86_64 --flavor tiny_ram_small_disk --nic net-id=b7ab2080-a71a-44f6-9f66-fde526bb73d3 --nic net-id=120a6fde-7e2d-4856-90ee-5609a5f3035f --security-group default --key-name bob-key CIRROSone
Here is the result:
root@columbo:~# nova list
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------------+
| d75ef5b3-060d-4ec0-9ddf-a3685a7f1199 | CIRROSone | ACTIVE | - | Running | SecondVlan=5.5.5.4; SERVER_VLAN_1=10.255.1.16 |
+--------------------------------------+-----------+---------+------------+-------------+-----------------------------------------------+
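If your cloud has the unified openstack client, the same two-NIC boot can be expressed as below (a hedged equivalent of the nova boot command above, reusing its placeholder image, flavor, and network IDs):
# Each --nic flag attaches one network interface to the new server
openstack server create \
--image cirros-0.3.3-x86_64 \
--flavor tiny_ram_small_disk \
--nic net-id=b7ab2080-a71a-44f6-9f66-fde526bb73d3 \
--nic net-id=120a6fde-7e2d-4856-90ee-5609a5f3035f \
--security-group default \
--key-name bob-key \
CIRROSone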

Cloud Foundry router cannot find api.xx.xxxx.com/info (AWS)

I finally managed to deploy Cloud Foundry to AWS, mostly following the instructions from http://docs.cloudfoundry.org/deploying/ec2/bootstrap-aws-vpc.html
It's failing at the validation step, which is to get a success response for the following:
curl api.subdomain.domain/info
Of course I have substituted the subdomain and domain appropriately.
I am getting the error:
404 Not Found: Requested route ('api.XX.XXXXX.com') does not exist.
The request reaches the Cloud Foundry router (router_z1), and I can see this error in the logs for router_z1.
Here is output of my bosh vms command:
+------------------------------------+---------+---------------+--------------+
| Job/index | State | Resource Pool | IPs |
+------------------------------------+---------+---------------+--------------+
| unknown/unknown | running | medium_z1 | 10.10.16.254 |
| unknown/unknown | running | medium_z2 | 10.10.81.4 |
| unknown/unknown | running | small_errand | 10.10.17.1 |
| unknown/unknown | running | small_errand | 10.10.17.0 |
| api_worker_z1/0 | running | small_z1 | 10.10.17.20 |
| api_z1/0 | running | large_z1 | 10.10.17.18 |
| clock_global/0 | running | medium_z1 | 10.10.17.19 |
| etcd_z1/0 | running | medium_z1 | 10.10.16.20 |
| hm9000_z1/0 | running | medium_z1 | 10.10.17.21 |
| loggregator_trafficcontroller_z1/0 | running | small_z1 | 10.10.16.34 |
| loggregator_z1/0 | running | medium_z1 | 10.10.16.31 |
| login_z1/0 | running | medium_z1 | 10.10.17.17 |
| nats_z1/0 | running | medium_z1 | 10.10.16.11 |
| router_z1/0 | running | router_z1 | 10.10.16.15 |
| runner_z1/0 | running | runner_z1 | 10.10.17.22 |
| stats_z1/0 | running | small_z1 | 10.10.17.15 |
| uaa_z1/0 | running | medium_z1 | 10.10.17.16 |
+------------------------------------+---------+---------------+--------------+
The only change I made in the CF deployment manifest was to eliminate the instances in zone 2, because AWS's default limit on the number of EC2 instances in a region is 20.
Any pointers on how to resolve this issue will be appreciated.
Figured out the problem. There were a couple of issues:
1. In the CF deployment manifest, make sure the system domain property is <BOSH_VPC_SUBDOMAIN>.<BOSH_VPC_DOMAIN>. That is, if you have reserved cf.example.com for the Cloud Foundry PaaS, make sure cf.example.com is what the system_domain property in your deployment manifest refers to. In fact, example.com should not appear anywhere in your deployment manifest without the cf. prefix; throughout the deployment manifest it is always cf.example.com.
2. Do not use '#' in any of the passwords within the deployment manifest. I have logged a bug for this in cf-release: https://github.com/cloudfoundry/cf-release/issues/527
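A quick way to sanity-check the domain wiring after fixing the manifest, assuming cf.example.com is your system domain (my sketch, not part of the original answer):
# DNS for the API endpoint should resolve to the router's load balancer
dig +short api.cf.example.com
# and the info endpoint should now return JSON instead of a 404
curl api.cf.example.com/info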

How to view Cloud Foundry logs when cf login fails

I have used bosh-lite to deploy a single-node Cloud Foundry in my development environment. After deployment, I run bosh vms and it returns the VM list:
+------------------------------------+---------+---------------+--------------+
| Job/index | State | Resource Pool | IPs |
+------------------------------------+---------+---------------+--------------+
| api_z1/0 | running | large_z1 | 10.244.0.138 |
| etcd_leader_z1/0 | running | medium_z1 | 10.244.0.38 |
| ha_proxy_z1/0 | running | router_z1 | 10.244.0.34 |
| hm9000_z1/0 | running | medium_z1 | 10.244.0.142 |
| loggregator_trafficcontroller_z1/0 | running | small_z1 | 10.244.0.10 |
| loggregator_z1/0 | running | medium_z1 | 10.244.0.14 |
| login_z1/0 | running | medium_z1 | 10.244.0.134 |
| nats_z1/0 | running | medium_z1 | 10.244.0.6 |
| postgres_z1/0 | running | medium_z1 | 10.244.0.30 |
| router_z1/0 | running | router_z1 | 10.244.0.22 |
| runner_z1/0 | running | runner_z1 | 10.244.0.26 |
| uaa_z1/0 | running | medium_z1 | 10.244.0.130 |
+------------------------------------+---------+---------------+--------------+
But when I try to use "cf api https://api.10.244.0.34.xip.io --skip-ssl-validation" to connect to Cloud Foundry, it returns an error:
ConnectEx tcp: No connection could be made because the target machine
actively refused it.
The log information is very general (it is actually the exception from the CF client, which is written in .NET) and doesn't provide useful information.
My question is, which VM handles the api command? And, where can I find the detail log in that VM?
api_z1/0 is handling the command. You can get its logs via the BOSH CLI itself: bosh logs api_z1 0 --all.
You probably also need to add a route to your local route table so that traffic to the HAProxy container at 10.244.0.34 knows to go through the BOSH-lite VM at 192.168.50.4. Run bin/add-route or bin/add-route.bat from the root of your BOSH-lite repo.
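For reference, a hedged sketch of what bin/add-route does under the hood (the exact subnet may differ between bosh-lite versions):
# macOS/BSD syntax: send the bosh-lite container network through the VM
sudo route add -net 10.244.0.0/19 192.168.50.4
# Linux equivalent
sudo ip route add 10.244.0.0/19 via 192.168.50.4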