I have created a managed Redis instance on GCP. From the GCP console I can view metrics like connected clients / blocked clients, memory usage / max memory, etc.
How can I get these metric values using the gcloud CLI? Furthermore, how can I get more details about each connection, like the <client_ip:port> of each client connected to the Redis instance?
Not sure if it will help you, but check the official documentation.
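As a concrete sketch: instance-level details come from gcloud redis, while the console metrics live in Cloud Monitoring, and as far as I know there is no dedicated gcloud command to read time series, so you go through the Monitoring API. The metric type below (redis.googleapis.com/clients/connected) is my assumption from the Memorystore metrics list; substitute your own project, region, instance, and time window:

# Instance details (host, port, memory size, Redis version):
gcloud redis instances describe my-instance --region=us-central1

# Pull the "connected clients" time series via the Monitoring API
# (metric type assumed from the Memorystore docs):
curl -s -G -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  --data-urlencode 'filter=metric.type="redis.googleapis.com/clients/connected"' \
  --data-urlencode 'interval.startTime=2023-01-01T00:00:00Z' \
  --data-urlencode 'interval.endTime=2023-01-01T01:00:00Z' \
  "https://monitoring.googleapis.com/v3/projects/my-project/timeSeries"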
As for seeing each connection to Redis, I believe it's not possible from gcloud, only with Unix tools like netstat inside a VM.
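That said, if you can reach the instance from a VM on the authorized network, the standard Redis CLIENT LIST command reports addr=<client_ip:port> for every connection; I believe Memorystore doesn't block it, but verify on your instance:

# From a VM with network access (host comes from "describe" above):
redis-cli -h 10.0.0.3 client list
# Each output line contains addr=<client_ip:port> for one connected client.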
I'd like to host an app that uses a database connection in an AWS Nitro enclave.
I understand that the Nitro enclave doesn't have access to a network or persistent storage, and the only way that it can communicate with its parent instance is through the vsock.
There are some examples showing how to configure a connection from the enclave to an external url through a secure channel using the vsock and vsock proxy, but the examples focus on AWS KMS operations.
I'd like to know if it's possible to configure the secure channel through the vsock and vsock proxy to connect to a database like Postgres/MySQL, etc.
If this is indeed possible, are there perhaps some example configurations somewhere?
Nitrogen is an easy solution for this, and it's completely open source (disclosure: I'm one of the contributors to Nitrogen).
You can see an example configuration for deploying Redis to a Nitro Enclave here.
And a more detailed blog post walkthrough of deploying any Docker container to a Nitro Enclave here.
Nitrogen is a command line tool with three main commands:
Setup - Spawn an EC2 instance, configure SSH, and establish a VSOCK proxy for interacting with the Nitro Enclave.
Build - Create a Docker image from an arbitrary Dockerfile, and convert it to the Enclave Image File (EIF) format expected by Nitro.
Deploy - Upload your EIF and launch it as a Nitro Enclave. You receive a hostname and port which are ready to proxy enclave requests to your service.
You can set up, build, and deploy any Dockerfile in a few minutes to your own AWS account.
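A typical session looks roughly like this; the subcommand names come from the list above, but the arguments are illustrative guesses, so check the Nitrogen README for the real syntax:

# Hypothetical arguments; consult the Nitrogen README for the exact flags.
nitrogen setup my-enclave    # spawn the EC2 instance, configure SSH, start the VSOCK proxy
nitrogen build .             # build the Dockerfile and convert the image to an EIF
nitrogen deploy my-enclave   # upload the EIF and launch it as a Nitro Enclave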
I would recommend looking into Anjuna Security's offering: https://www.anjuna.io/amazon-nitro-enclaves
Outside of using Anjuna, you could look into the AWS Nitro SDK and use that to build a networking stack to utilize the vsock or modify an existing sample.
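On the do-it-yourself route, note that the stock vsock-proxy shipped with aws-nitro-enclaves-cli can forward to any endpoint in its allowlist, not just KMS. A minimal sketch for a Postgres endpoint, assuming the default allowlist location and a placeholder RDS hostname:

# On the parent instance: allowlist the database endpoint (placeholder hostname).
# (Assumes the default allowlist file layout from the Nitro Enclaves docs.)
sudo tee -a /etc/nitro_enclaves/vsock-proxy.yaml <<'EOF'
- {address: mydb.example.us-east-1.rds.amazonaws.com, port: 5432}
EOF

# Forward vsock port 8001 to the database:
vsock-proxy 8001 mydb.example.us-east-1.rds.amazonaws.com 5432

# Inside the enclave, point the Postgres client at the parent (CID 3), port 8001.

The enclave side still needs a small vsock-to-TCP forwarder (socat or similar) so that an unmodified Postgres driver sees an ordinary socket.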
We have set up a fluentd agent on a GCP VM to push logs from a syslog server (the VM) to Google Cloud Logging. The current setup works fine and pushes more than 300k log entries to Stackdriver (Google Cloud Logging) per hour.
Due to increased traffic, we are planning to increase the number of VMs behind the load balancer. However, the new VM with the fluentd agent is not able to push logs to Stackdriver. After the VM's first activation it sends a few entries to Stackdriver, and after that it stops working.
I tried the options below to set up the fluentd agent and to resolve the issue:
Create a new VM from scratch and install fluentd logging agent using this Google Cloud documentation.
Duplicate the already working VM (with logging agent) by creating Images
Restart the VM
Reinstall the logging agent
Debugging I did (the concrete commands are sketched after this list):
Checked all the configuration for the google-fluentd agent. Everything is correct and exactly matches the currently working VM instance.
Checked "/var/log/google-fluentd/google-fluentd.log" for any logging errors. There are none.
Checked that the Logging API is enabled. As there are already a few million logs per day, I assume we are fine on that front.
Checked the CPU and memory consumption. It is close to 0.
Tried all the solutions I could find on Google (there are not many).
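For reference, the checks above boil down to commands like these (a sketch; exact output will differ per setup):

# Agent status and its own error log:
sudo service google-fluentd status
tail -n 50 /var/log/google-fluentd/google-fluentd.log

# Scopes of the VM's service account, straight from the metadata server:
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"

# Inject a syslog line the agent should pick up and forward:
logger "google-fluentd test entry"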
It would be great if someone could help me identify where exactly I am going wrong. I have checked the configuration/setup files multiple times and they look fine.
Troubleshooting steps to resolve the issue:
Check whether you are using the latest version of the fluentd agent. If not, try upgrading it. Refer to Upgrading the agent for information.
If you are running very old Compute Engine instances, or Compute Engine instances created without the default credentials, you must complete the Authorizing the agent procedures.
Another point to check is how you are Configuring an HTTP Proxy. If you use an HTTP proxy for proxying requests to the Logging and Monitoring APIs, the metadata server must still be reachable directly, without going through the proxy.
Check if you have any log exclusions configured that could be preventing the logs from arriving. Refer to Exclusion filters for information.
Try uninstalling the Fluentd agent and using the Ops Agent instead (note that it collects syslog logs with no extra setup), and check whether the logs show up. Combining logging and metrics into a single agent, the Ops Agent uses Fluent Bit for logs, which supports high-throughput logging, and the OpenTelemetry Collector for metrics. Refer to Ops agent for more information.
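If you go the Ops Agent route, the documented install is a short script (URL from the Ops Agent docs); stop the legacy agent first so the two don't collide:

sudo service google-fluentd stop
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install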
I am trying to manage Beaglebones/Raspberry Pis using AWS Systems Manager.
I registered one in AWS Systems Manager, as shown in the pic below.
However, it does not appear in the Session Manager tab.
In the Managed Instances tab I can try the Start Session action.
When I try to start the session as in the image above, I get the error shown below, which is not all that helpful.
Does anyone know how to make Session Manager work with hardware like Beaglebones or Raspberry Pis? Could it be that AWS only supports sessions with EC2 instances? Or maybe it is incompatible with the Beaglebone/Raspberry Pi?
Session Manager currently only supports connections to EC2 instances. On-premises managed instances (which is what a Raspberry Pi is after it's registered) are not supported by Session Manager today.
Other features of AWS Systems Manager will work: you can, for example, use Run Command to execute actions on your Raspberry Pi instance.
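For example, a Run Command invocation against a hybrid managed instance looks like this; the mi-... ID is a placeholder for the one Systems Manager assigned at registration:

aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --instance-ids "mi-0123456789abcdef0" \
  --parameters 'commands=["uname -a"]'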
Has anybody tried connecting AWS ElastiCache Redis (cluster mode disabled) for use with SignalR? I see there are some serious configuration issues and limitations with AWS Redis.
1) We are trying to use Redis as a backplane for SignalR:
//GlobalHost.DependencyResolver.UseRedis("xxxxxx.0001.use1.cache.amazonaws.com:6379", 6379, "", "Performance");
Per the docs it should be as simple as this, but I get a socket failure on Ping when I try to connect. (I have seen posts about this with Windows Azure, but could not find any help articles for AWS.)
2) Does cluster mode have to be enabled? With cluster mode disabled, we need to use the replica endpoints for reading, and SignalR does not know about this.
Thanks in advance.
We finally resolved it by removing the clusters and making a standalone AWS Redis.
The other issue we had was that it was assigned to the wrong security group, so we changed it to the same one as our EC2 instances.
A note on the port: if you are using the dependency resolver for SignalR, you should not include ":6379" in the endpoint name; pass the port as the separate argument instead, i.e. GlobalHost.DependencyResolver.UseRedis("xxxxxx.0001.use1.cache.amazonaws.com", 6379, "", "Performance");. But if you access Redis for read and write operations using StackExchange.Redis, then you do need to include ":6379" in the connection string.
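A quick way to confirm the security-group part is to test reachability from the EC2 instance before touching the SignalR config (the hostname is the placeholder from the question):

redis-cli -h xxxxxx.0001.use1.cache.amazonaws.com -p 6379 ping
# Expect PONG; a timeout here points at the security group, not SignalR.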
This note (https://learn.microsoft.com/en-us/aspnet/signalr/overview/performance/scaleout-with-redis) says "SignalR scaleout with Redis does not support Redis clusters.".
Also, perhaps remove ":6379" from the server name and only pass 6379 as the port?
Currently, I'm trying to monitor an AWS RDS DB instance (MySQL/MariaDB) with Zabbix, but I'm running into several problems:
I have the script placed in the externalscripts folder on the Zabbix server, and the template (https://github.com/datorama/zabbix_rds_template) with my AWS access data properly filled in. The host is already added as well, but Zabbix doesn't retrieve any data from the AWS RDS instance (all graphs display no data).
How could I check whether the Zabbix server is able to reach the RDS instance, to start ruling things out?
Does anyone know the correct way to add an AWS RDS host in Zabbix?
Any suggestion or advice is always welcome.
Thanks in advance
Kind Regards
KV.
Have you checked the Zabbix log? Maybe you are not using a user with enough permissions. Try running the script from the Zabbix server shell.
Is your Zabbix server in AWS? If yes, give it a role with read access to CloudWatch and RDS.
Also, I don't like using credentials in a script; I prefer to configure them with the AWS CLI.
http://docs.aws.amazon.com/es_es/cli/latest/userguide/cli-chap-getting-started.html
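That is, keep the keys in the standard AWS CLI config rather than in the script, and verify they work:

aws configure                  # prompts for access key, secret key, region, output format
aws sts get-caller-identity    # confirms the credentials are valid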
Here is another example, which I use, for monitoring AWS CloudWatch metrics with Zabbix:
https://www.dbigcloud.com/cloud-computing/230-integrando-metricas-de-aws-cloudwatch-en-zabbix.html
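As an end-to-end test from the Zabbix server shell (this also answers your reachability question: the template reads CloudWatch, so it is the CloudWatch API, not the RDS instance itself, that must be reachable); the DB identifier and times are placeholders:

aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name CPUUtilization \
  --dimensions Name=DBInstanceIdentifier,Value=my-rds-instance \
  --start-time 2023-01-01T00:00:00Z \
  --end-time 2023-01-01T01:00:00Z \
  --period 300 \
  --statistics Average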