SSH Expired on GCP VM Instance - google-cloud-platform

I have created a VM instance which connects to the external IP over HTTP but not over HTTPS.
On checking the logs, I see the following error:
Invalid ssh key entry - expired key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJdFE+rHGtkgTx0niNZRTQYb...........
......jH8ycULWLplemekTGdFnwoGhNb google-ssh {"userName":"my_user_name@gmail.com","expireOn":"2022-12-06T17:04:46+0000"}
Does anyone know why this is happening or how to resolve it? I have spent at least 10 hours trying to resolve this issue but have been unsuccessful, as I am not from a technical field.
I tried
creating a new SSH key - I had never done that before - and then updating it in the metadata or through the console terminal, etc.
I also tried adding a new SSH key directly, but that didn't work either.
Edit
Ran the following as per the comment:
gcloud compute project-info describe --format flattened
Result below:
commonInstanceMetadata.fingerprint: mgT7F7wYfBw=
commonInstanceMetadata.items[0].key: ssh-keys
commonInstanceMetadata.items[0].value: himanshusomani007:ssh-rsa AAAAB3NzaC1yc2EAAAADAQA..........cULWLplemekTGdFnwoGhNb google-ssh {"userName":"my_user_name@gmail.com","expireOn":"2023-12-04T17:04:46+0000"}
commonInstanceMetadata.kind: compute#metadata
creationTimestamp: 2022-11-17T00:15:29.195-08:00
defaultNetworkTier: PREMIUM
defaultServiceAccount: 1054284009344-compute@developer.gserviceaccount.com
id: 3401782412795466575
kind: compute#project
name: principal-storm-368908
quotas[0].limit: 1000.0
quotas[0].metric: SNAPSHOTS
quotas[0].usage: 0.0
quotas[1].limit: 5.0
quotas[1].metric: NETWORKS
quotas[1].usage: 1.0
quotas[2].limit: 100.0
quotas[2].metric: FIREWALLS
quotas[2].usage: 9.0
quotas[3].limit: 100.0
quotas[3].metric: IMAGES
quotas[3].usage: 0.0
quotas[4].limit: 8.0
quotas[4].metric: STATIC_ADDRESSES
quotas[4].usage: 0.0
quotas[5].limit: 200.0
quotas[5].metric: ROUTES
quotas[5].usage: 1.0
quotas[6].limit: 15.0
quotas[6].metric: FORWARDING_RULES
quotas[6].usage: 0.0
quotas[7].limit: 50.0
quotas[7].metric: TARGET_POOLS
quotas[7].usage: 0.0
quotas[8].limit: 50.0
quotas[8].metric: HEALTH_CHECKS
quotas[8].usage: 0.0
quotas[9].limit: 8.0
quotas[9].metric: IN_USE_ADDRESSES
quotas[9].usage: 0.0
quotas[10].limit: 50.0
quotas[10].metric: TARGET_INSTANCES
quotas[10].usage: 0.0
quotas[11].limit: 10.0
quotas[11].metric: TARGET_HTTP_PROXIES
quotas[11].usage: 0.0
quotas[12].limit: 10.0
quotas[12].metric: URL_MAPS
quotas[12].usage: 0.0
quotas[13].limit: 50.0
quotas[13].metric: BACKEND_SERVICES
quotas[13].usage: 0.0
quotas[14].limit: 100.0
quotas[14].metric: INSTANCE_TEMPLATES
quotas[14].usage: 0.0
quotas[15].limit: 5.0
quotas[15].metric: TARGET_VPN_GATEWAYS
quotas[15].usage: 0.0
quotas[16].limit: 10.0
quotas[16].metric: VPN_TUNNELS
quotas[16].usage: 0.0
quotas[17].limit: 3.0
quotas[17].metric: BACKEND_BUCKETS
quotas[17].usage: 0.0
quotas[18].limit: 10.0
quotas[18].metric: ROUTERS
quotas[18].usage: 0.0
quotas[19].limit: 10.0
quotas[19].metric: TARGET_SSL_PROXIES
quotas[19].usage: 0.0
quotas[20].limit: 10.0
quotas[20].metric: TARGET_HTTPS_PROXIES
quotas[20].usage: 0.0
quotas[21].limit: 10.0
quotas[21].metric: SSL_CERTIFICATES
quotas[21].usage: 0.0
quotas[22].limit: 100.0
quotas[22].metric: SUBNETWORKS
quotas[22].usage: 0.0
quotas[23].limit: 10.0
quotas[23].metric: TARGET_TCP_PROXIES
quotas[23].usage: 0.0
quotas[24].limit: 32.0
quotas[24].metric: CPUS_ALL_REGIONS
quotas[24].usage: 1.0
quotas[25].limit: 10.0
quotas[25].metric: SECURITY_POLICIES
quotas[25].usage: 0.0
quotas[26].limit: 100.0
quotas[26].metric: SECURITY_POLICY_RULES
quotas[26].usage: 0.0
quotas[27].limit: 1000.0
quotas[27].metric: XPN_SERVICE_PROJECTS
quotas[27].usage: 0.0
quotas[28].limit: 20.0
quotas[28].metric: PACKET_MIRRORINGS
quotas[28].usage: 0.0
quotas[29].limit: 100.0
quotas[29].metric: NETWORK_ENDPOINT_GROUPS
quotas[29].usage: 0.0
quotas[30].limit: 6.0
quotas[30].metric: INTERCONNECTS
quotas[30].usage: 0.0
quotas[31].limit: 5000.0
quotas[31].metric: GLOBAL_INTERNAL_ADDRESSES
quotas[31].usage: 0.0
quotas[32].limit: 5.0
quotas[32].metric: VPN_GATEWAYS
quotas[32].usage: 0.0
quotas[33].limit: 100.0
quotas[33].metric: MACHINE_IMAGES
quotas[33].usage: 0.0
quotas[34].limit: 20.0
quotas[34].metric: SECURITY_POLICY_CEVAL_RULES
quotas[34].usage: 0.0
quotas[35].limit: 0.0
quotas[35].metric: GPUS_ALL_REGIONS
quotas[35].usage: 0.0
quotas[36].limit: 5.0
quotas[36].metric: EXTERNAL_VPN_GATEWAYS
quotas[36].usage: 0.0
quotas[37].limit: 1.0
quotas[37].metric: PUBLIC_ADVERTISED_PREFIXES
quotas[37].usage: 0.0
quotas[38].limit: 10.0
quotas[38].metric: PUBLIC_DELEGATED_PREFIXES
quotas[38].usage: 0.0
quotas[39].limit: 128.0
quotas[39].metric: STATIC_BYOIP_ADDRESSES
quotas[39].usage: 0.0
quotas[40].limit: 10.0
quotas[40].metric: NETWORK_FIREWALL_POLICIES
quotas[40].usage: 0.0
quotas[41].limit: 15.0
quotas[41].metric: INTERNAL_TRAFFIC_DIRECTOR_FORWARDING_RULES
quotas[41].usage: 0.0
quotas[42].limit: 15.0
quotas[42].metric: GLOBAL_EXTERNAL_MANAGED_FORWARDING_RULES
quotas[42].usage: 0.0
selfLink: https://www.googleapis.com/compute/v1/projects/principal-storm-368908
vmDnsSetting: ZONAL_ONLY
xpnProjectStatus: UNSPECIFIED_XPN_PROJECT_STATUS

From the error shared in your description, the SSH key has expired.
You need to create a new RSA key:
ssh-keygen
Then copy the new public SSH key into the SSH Keys section of the VM instance.
Please copy the public key, not the private one; you can identify the public key by its .pub extension.
Follow the instructions from here.
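The key-generation step can be sketched as a shell session. This is a minimal sketch, assuming a Linux workstation; USERNAME is a placeholder for your login user, and the scratch directory just avoids clobbering anything in ~/.ssh:

```shell
# Work in a scratch directory so nothing in ~/.ssh is overwritten
keydir=$(mktemp -d)

# Generate a fresh RSA key pair; the .pub file is the public key
ssh-keygen -t rsa -N '' -f "$keydir/gcp-key" -C USERNAME -q

# GCE instance metadata expects "user:key" entries under the ssh-keys key
printf 'USERNAME:%s\n' "$(cat "$keydir/gcp-key.pub")" > "$keydir/ssh-keys"
cat "$keydir/ssh-keys"
```

The resulting "user:key" line can then be pasted into the instance's SSH Keys section in the console, or attached from the CLI with gcloud compute instances add-metadata INSTANCE --metadata-from-file ssh-keys=FILE, after which you connect with the matching private key.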

Related

how to tell which ini file the current uwsgi process is using?

On a production website running Nginx, uWSGI and Django, with tons of virtualenvs and Django projects, is there any way to tell which .ini file uWSGI loaded?
I ran "ps aux | grep uwsgi" and it shows this:
ubuntu 2136 0.0 0.4 108280 32872 pts/1 S+ Mar29 22:58 uwsgi repository.ini
ubuntu 9337 0.0 0.4 111312 34836 pts/1 S+ Jul11 0:05 uwsgi repository.ini
ubuntu 9893 0.0 0.4 111572 34836 pts/1 S+ Jul11 0:07 uwsgi repository.ini
ubuntu 9980 0.0 0.4 300744 37496 pts/6 Sl+ Jul11 0:07 uwsgi repository.ini
ubuntu 12442 0.1 0.4 300752 37520 pts/6 Sl+ Jul11 0:07 uwsgi repository.ini
ubuntu 12663 0.1 0.4 111548 34872 pts/1 S+ Jul11 0:08 uwsgi repository.ini
ubuntu 15462 0.1 0.4 300752 37520 pts/6 Sl+ 00:22 0:05 uwsgi repository.ini
ubuntu 15767 0.1 0.4 111568 34852 pts/1 S+ 00:25 0:09 uwsgi repository.ini
ubuntu 17740 0.1 0.4 300752 37524 pts/6 Sl+ 00:43 0:05 uwsgi repository.ini
ubuntu 18874 0.0 0.4 107356 33944 pts/5 S+ May15 2:02 uwsgi repository.ini
ubuntu 18876 0.0 0.4 110272 33856 pts/5 S+ May15 0:00 uwsgi repository.ini
ubuntu 18877 0.0 0.4 110368 34068 pts/5 S+ May15 0:00 uwsgi repository.ini
ubuntu 20763 0.1 0.4 300744 37504 pts/6 Sl+ 01:12 0:04 uwsgi repository.ini
ubuntu 22143 0.0 0.4 301004 37716 pts/6 Sl+ Jul11 0:10 uwsgi repository.ini
ubuntu 25620 0.0 0.0 13772 1104 pts/0 S+ 01:54 0:00 grep --color=auto uwsgi
ubuntu 25915 0.0 0.4 301132 38492 pts/6 Sl+ Jul11 0:11 uwsgi repository.ini
ubuntu 27713 0.0 0.4 300756 37508 pts/6 Sl+ Jul11 0:10 uwsgi repository.ini
ubuntu 28648 0.0 0.3 92948 29528 pts/4 S+ May15 2:02 uwsgi repository.ini
ubuntu 28650 0.0 0.4 300576 36920 pts/4 Sl+ May15 0:01 uwsgi repository.ini
ubuntu 28651 0.0 0.4 300484 36812 pts/4 Sl+ May15 0:00 uwsgi repository.ini
ubuntu 30146 0.0 0.3 93864 31336 pts/6 S+ May15 12:38 uwsgi repository.ini
ubuntu 30187 0.0 0.4 113104 36372 pts/1 S+ Jul11 0:07 uwsgi repository.ini
ubuntu 30910 0.0 0.4 113088 36492 pts/1 S+ Jul11 0:07 uwsgi repository.ini
ubuntu 32262 0.0 0.4 112852 36404 pts/1 S+ Jul11 0:06 uwsgi repository.ini
ubuntu 32618 0.0 0.4 113100 36756 pts/1 S+ Jul11 0:08 uwsgi repository.ini
but I could not tell which repository.ini is running.
If you have run your application following the docs, you can run
ps aux | grep uwsgi and you should see a list of uwsgi instances (if you have multiple) and their corresponding ini files. You can then look up each .ini file to check which one is running what.
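When every entry shows the same bare repository.ini, as in the listing above, the relative path only resolves against each process's working directory. A minimal sketch of how to disambiguate them via /proc on Linux (substitute a real uwsgi PID from the ps output, e.g. 9337, for the placeholder):

```shell
pid=$$   # placeholder: use a uwsgi PID from "ps aux | grep uwsgi", e.g. 9337

# Working directory the process was started from; a relative
# "repository.ini" resolves against this path
readlink "/proc/$pid/cwd"

# Full command line (NUL-separated in /proc), printed readably
tr '\0' ' ' < "/proc/$pid/cmdline"; echo
```

Running this for each PID reveals which directory's repository.ini each uwsgi instance actually loaded.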

Return an Eigen::Tensor slice from a function

I would like to write functions that return slices of Eigen::Tensor. In the real code, getSlice() takes some integers and the extent and offset are calculated. I would like my functions to return a view into the array so that I can access the array for reading and writing without copying.
I can create a variable that is a slice of my array and alter the data. But when I return the same slice from a function the values are not altered. I am guessing that the function generates a new array as the return value. How do I return the slice I need? Or should I do this a different way?
#include <iostream>
#include <Eigen/Dense>
#include <unsupported/Eigen/CXX11/Tensor>

Eigen::Tensor<float,3> getSlice(Eigen::Tensor<float,3>& a,
                                Eigen::array<long,3>& offset,
                                Eigen::array<long,3>& extent)
{
    return a.slice(offset, extent);
}

int main()
{
    Eigen::Tensor<float,3> et = Eigen::Tensor<float,3>(3,5,4);
    et.setConstant(1.1);
    std::cout << et << std::endl;

    Eigen::array<long,3> offset = {0,0,0};
    Eigen::array<long,3> extent = {2,2,1};
    et.slice(offset, extent).setConstant(2.2);
    std::cout << "Set slice constant" << std::endl;
    std::cout << et << std::endl;

    auto sl = et.slice(offset, extent);
    sl.setConstant(3.3);
    std::cout << "Set slice constant from slice instance." << std::endl;
    std::cout << et << std::endl;

    getSlice(et, offset, extent).setConstant(4.4);
    std::cout << "Set slice constant from function." << std::endl;
    std::cout << et << std::endl;
}
Program output:
$ ./ta
1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1
1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1
1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1
Set slice constant
2.2 2.2 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1
2.2 2.2 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1
1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1
Set slice constant from slice instance.
3.3 3.3 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1
3.3 3.3 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1
1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1
Set slice constant from function.
3.3 3.3 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1
3.3 3.3 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1
1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1 1.1
Your observation that your implementation of getSlice returns a new Tensor object (with a copy of the original data) is correct. In your case the simplest solution is to change the return type to auto (even though, you should generally be careful with auto and Eigen):
inline auto getSlice(Eigen::Tensor<float,3>& a,
                     Eigen::array<long,3>& offset,
                     Eigen::array<long,3>& extent)
{
    return a.slice(offset, extent);
}
Live-Demo: https://godbolt.org/z/tLWYUz

broadcast_rpc_address in 3 seed and 3 non-seed node cassandra cluster deployed on AWS

What should I set broadcast_rpc_address to in a 3-seed and 3-non-seed-node Cassandra cluster deployed on AWS?
rpc_address is set to the wildcard 0.0.0.0,
seed nodes are launched using static ENIs,
non-seed nodes are launched using an ASG,
all the nodes are launched in a private subnet and can reach the internet through a NAT gateway.
I have added the cassandra.yaml file I am using:
cluster_name: 'Cassandra Cluster'
num_tokens: 256
hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
permissions_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
- /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
disk_failure_policy: stop
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "seednode-A-IP,seednode-B-IP,seednode-C-IP"
concurrent_reads: 32
concurrent_writes: 32
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 10.8.9.83
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: 0.0.0.0
broadcast_rpc_address: NAT-GATEWAY-IP
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
column_index_size_in_kb: 64
compaction_throughput_mb_per_sec: 16
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
cross_node_timeout: false
endpoint_snitch: Ec2Snitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
  internode_encryption: none
  keystore: conf/.keystore
  keystore_password: cassandra
  truststore: conf/.truststore
  truststore_password: cassandra
client_encryption_options:
  enabled: false
  keystore: conf/.keystore
  keystore_password: cassandra
internode_compression: all
inter_dc_tcp_nodelay: false
OK, found the issue:
in Cassandra versions 3.0 and above, CQL uses port 9042 for connections. After changing the LB listener port from 9160 to 9042, it worked.

What address should I use for broadcast_rpc_address in cassandra.yaml

cluster_name: 'Cassandra Cluster'
num_tokens: 256
hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
permissions_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
- /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
disk_failure_policy: stop
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "seednode-A-IP,seednode-B-IP,seednode-C-IP"
concurrent_reads: 32
concurrent_writes: 32
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 10.8.9.83
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: 0.0.0.0
broadcast_rpc_address: NAT-GATEWAY-IP
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
column_index_size_in_kb: 64
compaction_throughput_mb_per_sec: 16
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
cross_node_timeout: false
endpoint_snitch: Ec2Snitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
  internode_encryption: none
  keystore: conf/.keystore
  keystore_password: cassandra
  truststore: conf/.truststore
  truststore_password: cassandra
client_encryption_options:
  enabled: false
  keystore: conf/.keystore
  keystore_password: cassandra
internode_compression: all
inter_dc_tcp_nodelay: false
We have a Cassandra cluster deployed on AWS with 3 seed nodes (static ENIs attached) and 3 non-seed nodes in an Auto Scaling group.
I have rpc_address set to 0.0.0.0; can someone tell me what broadcast_rpc_address should be in the cassandra.yaml file?
I was using Cassandra 2.0.7 before and was able to connect to the cluster fine with just rpc_address set to 0.0.0.0 and broadcast_rpc_address left unset, but when I upgraded to 3.11.1 it gives me this error:
CassandraDaemon.java:708 - Exception encountered during startup: If rpc_address is set to a wildcard address (0.0.0.0), then you must set broadcast_rpc_address to a value other than 0.0.0.0
If you are using Cassandra 2.1 or greater, you can configure broadcast_rpc_address; set it to the node's public IP, or any IP your clients can route to.
Cassandra uses "broadcast_address/listen_address" for internode connectivity and "broadcast_rpc_address/rpc_address" for the RPC interface (client -> coordinator (Cassandra node) requests).
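That distinction maps onto a handful of cassandra.yaml lines. A minimal sketch, assuming clients reach the nodes over their private IPs; NODE-PRIVATE-IP is a per-node placeholder in the same spirit as the NAT-GATEWAY-IP placeholder above:

```yaml
# Bind the native transport on all interfaces
rpc_address: 0.0.0.0
# Advertise this node's own routable address to clients; per the startup
# error, this must not be 0.0.0.0 (placeholder value shown)
broadcast_rpc_address: NODE-PRIVATE-IP
# Internode traffic still uses listen_address / broadcast_address
listen_address: NODE-PRIVATE-IP
```

Each node advertises its own address here, so this value differs per node even though the rest of the file is shared.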

WSGI using more daemon processes than it should?

So I set up a WSGI server running Python/Django code, and stuck the following in my httpd.conf file:
WSGIDaemonProcess mysite.com processes=2 threads=15 user=django group=django
However, when I go to the page and hit "refresh" really quickly, it seems that I am getting way more than two processes:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21042 django 20 0 975m 36m 4440 S 98.9 6.2 0:15.63 httpd
1017 root 20 0 67688 2352 740 S 0.3 0.4 0:10.50 sendmail
21041 django 20 0 974m 40m 4412 S 0.3 6.7 0:16.36 httpd
21255 django 20 0 267m 8536 2036 S 0.3 1.4 0:01.02 httpd
21256 django 20 0 267m 8536 2036 S 0.3 1.4 0:00.01 httpd
I thought setting processes=2 would limit it to two processes. Is there something I'm missing?
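For reference, WSGIDaemonProcess on its own only defines a daemon process group; requests are handled there only if the application is also delegated to it. A minimal sketch of the usual pairing (the script path is a placeholder, not taken from the question):

```apache
# Define a daemon process group with 2 processes x 15 threads
WSGIDaemonProcess mysite.com processes=2 threads=15 user=django group=django
# Delegate the application to that group; without this, requests run
# in Apache's own (embedded) worker processes instead
WSGIProcessGroup mysite.com
WSGIScriptAlias / /srv/mysite/wsgi.py
```

Note also that top lists every httpd process, including Apache's regular child workers, so seeing more than two httpd entries does not by itself mean more than two daemon processes.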