I've been trying to change the EC2 instance timezone to IST, but following the AWS docs isn't helping at all.
ls /usr/share/zoneinfo/Asia
Aden Atyrau Brunei Damascus Hebron Jerusalem Kolkata Makassar Phnom_Penh Saigon Tashkent Ujung_Pandang Yangon
Almaty Baghdad Calcutta Dhaka Ho_Chi_Minh Kabul Krasnoyarsk Manila Pontianak Sakhalin Tbilisi Ulaanbaatar Yekaterinburg
Amman Bahrain Chita Dili Hong_Kong Kamchatka Kuala_Lumpur Muscat Pyongyang Samarkand Tehran Ulan_Bator Yerevan
Anadyr Baku Choibalsan Dubai Hovd Karachi Kuching Nicosia Qatar Seoul Tel_Aviv Urumqi
Aqtau Bangkok Chongqing Dushanbe Irkutsk Kashgar Kuwait Novokuznetsk Qostanay Shanghai Thimbu Ust-Nera
Aqtobe Barnaul Chungking Famagusta Istanbul Kathmandu Macao Novosibirsk Qyzylorda Singapore Thimphu Vientiane
Ashgabat Beirut Colombo Gaza Jakarta Katmandu Macau Omsk Rangoon Srednekolymsk Tokyo Vladivostok
Ashkhabad Bishkek Dacca Harbin Jayapura Khandyga Magadan Oral Riyadh Taipei Tomsk Yakutsk
sudo vi /etc/sysconfig/clock
ZONE="Asia/Calcutta"
UTC=true
I edited the file to the required timezone and linked it to local time
sudo ln -sf /usr/share/zoneinfo/Asia/Calcutta /etc/localtime
I rebooted the machine and checked date, only to see the following:
Mon Sep 16 16:06:13 UTC 2019
I did this a few times, also changing the zone to Asia/Kolkata, but nothing changed. Any suggestions would be really helpful.
The simplest way to change the timezone on an EC2 instance (on Debian/Ubuntu-based AMIs) is to run the following command when logged in:
$ sudo dpkg-reconfigure tzdata
This opens a screen where you select a geographical area (press Enter to confirm), then a city. This changes the timezone for the EC2 instance you are currently logged into.
For example, to set the Europe/Amsterdam timezone (when run as root):
ln -sf /usr/share/zoneinfo/Europe/Amsterdam /etc/localtime
echo -e 'ZONE="Europe/Amsterdam"\nUTC=true' > /etc/sysconfig/clock
reboot
This worked for me, but it requires a reboot.
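A note on the zone names from the question: Asia/Calcutta is a legacy alias for Asia/Kolkata, so either symlink target should behave the same. As a sanity check that is not part of the original answer, Python's stdlib zoneinfo can confirm the offset a zone name should produce before you link it:

```python
# Sketch: confirm the expected UTC offset for a named zone before symlinking
# /etc/localtime to it. Assumes Python 3.9+ with system tzdata available.
from datetime import datetime
from zoneinfo import ZoneInfo

def zone_offset(zone_name: str) -> str:
    """Return the current UTC offset string (e.g. '+0530') for a zone."""
    now = datetime.now(ZoneInfo(zone_name))
    return now.strftime("%z")

print(zone_offset("Asia/Kolkata"))  # IST is UTC+05:30 -> '+0530'
```

If this prints '+0530' but date still shows UTC after relinking and rebooting, the problem is with the link or with something overwriting /etc/localtime, not with the zone name.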
When trying to SSH to GCE VMs using metadata-based SSH keys I get the following error:
ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
While troubleshooting I can see the keys in the instance metadata, but they are not being added to the user's authorized_keys file:
$ curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/ssh-keys"
username:ssh-ed25519 AAAAC3NzaC....omitted....
admin:ssh-ed25519 AAAAC3NzaC....omitted....
$ sudo ls -hal /home/**/.ssh/
/home/ubuntu/.ssh/:
total 8.0K
drwx------ 2 ubuntu ubuntu 4.0K Aug 11 23:19 .
drwxr-xr-x 3 ubuntu ubuntu 4.0K Aug 11 23:19 ..
-rw------- 1 ubuntu ubuntu 0 Aug 11 23:19 authorized_keys
# Only result is the default zero-length file for ubuntu user
I also see the following errors in the ssh server auth log and Google Guest Environment services:
$ sudo less /var/log/auth.log
Aug 11 23:28:59 test-vm sshd[2197]: Invalid user admin from 1.2.3.4 port 34570
Aug 11 23:28:59 test-vm sshd[2197]: Connection closed by invalid user admin 1.2.3.4 port 34570 [preauth]
$ sudo journalctl -u google-guest-agent.service
Aug 11 22:24:42 test-vm oslogin_cache_refresh[907]: Refreshing passwd entry cache
Aug 11 22:24:42 test-vm oslogin_cache_refresh[907]: Refreshing group entry cache
Aug 11 22:24:42 test-vm oslogin_cache_refresh[907]: Failure getting groups, quitting
Aug 11 22:24:42 test-vm oslogin_cache_refresh[907]: Failed to get groups, not updating group cache file, removing /etc/oslogin_group.cache.bak.
# or
Aug 11 23:19:37 test-vm GCEGuestAgent[766]: 2022-08-11T23:19:37.6541Z GCEGuestAgent Info: Creating user admin.
Aug 11 23:19:37 test-vm useradd[885]: failed adding user 'admin', data deleted
Aug 11 23:19:37 test-vm GCEGuestAgent[766]: 2022-08-11T23:19:37.6869Z GCEGuestAgent Error non_windows_accounts.go:144:
Error creating user: useradd: group admin exists - if you want to add this user to that group, use -g.
Currently the latest cloud-init and guest-oslogin packages for Ubuntu 20.04.4 LTS (focal) seem to have an issue that causes google-guest-agent.service to exit before completing its task. The issue was fixed and committed but not yet released for focal (and likely other Ubuntu versions).
For now you can try disabling OS Login by setting the instance or project metadata enable-oslogin=FALSE, after which you should see the expected results and be able to SSH using those keys:
$ sudo journalctl -u google-guest-agent.service
Aug 11 23:10:33 test-vm GCEGuestAgent[761]: 2022-08-11T23:10:33.0517Z GCEGuestAgent Info: Created google sudoers file
Aug 11 23:10:33 test-vm GCEGuestAgent[761]: 2022-08-11T23:10:33.0522Z GCEGuestAgent Info: Creating user username.
Aug 11 23:10:33 test-vm useradd[881]: new group: name=username, GID=1002
Aug 11 23:10:33 test-vm useradd[881]: new user: name=username, UID=1001, GID=1002, home=/home/username, shell=/bin/bash, from=none
Aug 11 23:10:33 test-vm gpasswd[895]: user username added by root to group ubuntu
Aug 11 23:10:33 test-vm gpasswd[904]: user username added by root to group adm
Aug 11 23:10:33 test-vm gpasswd[983]: user username added by root to group google-sudoers
Aug 11 23:10:33 test-vm GCEGuestAgent[761]: 2022-08-11T23:10:33.7615Z GCEGuestAgent Info: Updating keys for user username.
$ sudo ls -hal /home/username/.ssh/
/home/username/.ssh/:
total 12K
drwx------ 2 username username 4.0K Aug 11 23:19 .
drwxr-xr-x 4 username username 4.0K Aug 11 23:35 ..
-rw------- 1 username username 589 Aug 11 23:19 authorized_keys
The admin user, however, will not work, since it conflicts with an existing Linux group. You should pick a username that does not conflict with any of the group names listed by getent group.
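That conflict check can be scripted. A minimal sketch (my addition, not part of the guest agent) using Python's Unix-only grp module to test a candidate username against the local group database:

```python
# Sketch: check whether a candidate username collides with an existing local
# group name, which is what caused the 'useradd: group admin exists' failure.
# Unix-only: the grp module is not available on Windows.
import grp

def conflicts_with_group(username: str) -> bool:
    """True if a local group with this name already exists."""
    existing = {g.gr_name for g in grp.getgrall()}
    return username in existing

print(conflicts_with_group("admin"))  # True on systems with an 'admin' group
```

Run this on the VM (or against its /etc/group) before choosing the username you put in the ssh-keys metadata.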
I'm experiencing a strange issue where my data is in UTC but the date picker selects time with a large offset. I'd like the UI to select UTC time only.
The documentation states that Superset is built to run on UTC time only. I also found some threads saying one can change this by setting the TZ environment variable to another timezone:
ENV TZ Europe/Amsterdam
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
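For reference, the effect that TZ value would have can be checked from Python as well. This is a POSIX-only sketch (time.tzset does not exist on Windows), and the zone simply mirrors the Dockerfile snippet above:

```python
# Sketch: TZ changes what the C library (and hence `date` in the container)
# reports as the local timezone. POSIX-only; requires system tzdata.
import os
import time

os.environ["TZ"] = "Europe/Amsterdam"
time.tzset()  # re-read TZ; affects time.localtime(), time.tzname, etc.
print(time.tzname)  # e.g. ('CET', 'CEST')
```

Note this only shifts what the process considers local time; it does not change how Superset stores or queries timestamps.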
Wrong behaviour (UTC is approx 17h December 22nd):
(same behaviour in all dashboards)
Query behind this data (I used "View query" in SQL Lab):
SELECT toDateTime(intDiv(toUInt32(toDateTime(delivery_date)), 300)*300) AS __timestamp,
COUNT(*) AS count
FROM spearad_data.a_performance_view
WHERE delivery_date >= toDateTime('2020-12-21 23:00:00')
AND delivery_date < toDateTime('2020-12-22 00:00:00')
GROUP BY toDateTime(intDiv(toUInt32(toDateTime(delivery_date)), 300)*300)
ORDER BY count DESC
LIMIT 10000;
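The intDiv(toUInt32(toDateTime(delivery_date)), 300)*300 expression in that query just floors each timestamp to a 5-minute bucket; a minimal Python sketch of the same arithmetic (the epoch value is an arbitrary example):

```python
# Sketch: ClickHouse's intDiv(toUInt32(ts), 300)*300 floors an epoch
# timestamp to the nearest 300-second (5-minute) boundary below it.
def bucket_300s(epoch_seconds: int) -> int:
    return (epoch_seconds // 300) * 300

print(bucket_300s(1608654673))  # -> 1608654600 (73 s into its bucket)
```

Since the bucketing is pure integer arithmetic on UTC epochs, any offset in the chart must come from the layer that renders or requests the time range, not from this expression.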
In my case I checked both the ECS Docker container and the EC2 instance the task (container) runs on.
EC2 machine:
[ec2-user@1.2.3.4]$ date
Tue Dec 22 16:31:07 UTC 2020
[ec2-user@1.2.3.4]$ echo $TZ
[ec2-user@ip-1-2-3-4]$ date +'%:z %Z'
+00:00 UTC
[ec2-user@ip-1-2-3-4]$ cat /etc/timezone
cat: /etc/timezone: No such file or directory
[ec2-user@ip-1-2-3-4]$ timedatectl
Local time: Tue 2020-12-22 16:40:47 UTC
Universal time: Tue 2020-12-22 16:40:47 UTC
RTC time: Tue 2020-12-22 16:40:42
Time zone: n/a (UTC, +0000)
NTP enabled: yes
NTP synchronized: no
RTC in local TZ: no
DST active: n/a
ECS container:
/ # date
Tue Dec 22 16:43:53 UTC 2020
/ # echo $TZ
/ # date +'%:z %Z'
/ # cat /etc/timezone
cat: can't open '/etc/timezone': No such file or directory
Clickhouse DB:
SELECT now();
2020-12-22T16:50:32
Superset MySQL (AWS RDS):
SELECT now();
2020-12-22T16:52:27
https://time.is/de/UTC
16:57:01
Save a query in SQLLab
created_on
2020-12-22T16:59:03
Data is UTC based as well. So where do I need to change this? It seems there's another setting or configuration missing.
I have the following setup:
Apache Superset 0.37 on AWS ECS
Superset ConfigDB: AWS RDS
Fact DB: Clickhouse DB 20.7
Driver: clickhouse-sqlalchemy (native mode)
I have an ENI created, and I need to attach it as a secondary ENI to my EC2 instance dynamically using CloudFormation. As I am using a Red Hat AMI, I have to manually configure RHEL, which includes the steps mentioned in the post below.
Manually Configuring secondary Elastic network interface on Red hat ami- 7.5
Can someone please tell me how to automate all of this using CloudFormation? Is there a way to do all of it using user data in a CloudFormation template? I also need to make sure the configuration survives a reboot of my EC2 instance (currently the configuration gets deleted after reboot).
Though it's not complete automation, you can do the following to make sure the ENI comes up after every reboot of your EC2 instance (RHEL instances only). If anyone has a better suggestion, kindly share.
vi /etc/systemd/system/create.service
Add the following content:
[Unit]
Description=XYZ
After=network.target
[Service]
ExecStart=/usr/local/bin/my.sh
[Install]
WantedBy=multi-user.target
Change permissions and enable the service
chmod a+x /etc/systemd/system/create.service
systemctl enable /etc/systemd/system/create.service
The shell script below does the ENI configuration on RHEL:
vi /usr/local/bin/my.sh
Add the following content:
#!/bin/bash
my_eth1=`curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/0e:3f:96:77:bb:f8/local-ipv4s/`
echo "this is the value--" $my_eth1 "hoo"
GATEWAY=`ip route | awk '/default/ { print $3 }'`
printf "NETWORKING=yes\nNOZEROCONF=yes\nGATEWAYDEV=eth0\n" >/etc/sysconfig/network
printf "\nBOOTPROTO=dhcp\nDEVICE=eth1\nONBOOT=yes\nTYPE=Ethernet\nUSERCTL=no\n" >/etc/sysconfig/network-scripts/ifcfg-eth1
ifup eth1
ip route add default via $GATEWAY dev eth1 tab 2
ip rule add from $my_eth1/32 tab 2 priority 600
Start the service
systemctl start create.service
You can check whether the script ran fine with:
journalctl -u create.service -b
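If you generate the ifcfg-eth1 file from your own tooling instead of the printf above, a pure function keeps the format in one place. This is a sketch of my own (the device name and option set simply mirror the script above; adjust for any other layout):

```python
# Sketch: build the ifcfg-<device> content written by the script above
# as a pure function, so it can be unit-tested before touching the host.
def ifcfg(device: str) -> str:
    options = {
        "BOOTPROTO": "dhcp",   # let DHCP assign the secondary IP
        "DEVICE": device,
        "ONBOOT": "yes",       # bring the interface up at boot
        "TYPE": "Ethernet",
        "USERCTL": "no",
    }
    return "\n".join(f"{k}={v}" for k, v in options.items()) + "\n"

print(ifcfg("eth1"))
```

The output can then be written to /etc/sysconfig/network-scripts/ifcfg-eth1 by whatever runs in user data.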
I still need to figure out joining the secondary ENI from Linux, but this is the Python script I wrote to have the instance find the corresponding ENI and attach it to itself. The script works by taking a predefined Name tag on both the ENI and the instance, then joining the two together.
Pre-reqs for setting this up are:
IAM role on the instance to allow access to S3 bucket where script is stored
Install pip and the AWS CLI in the user data section
curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install awscli --upgrade
aws configure set default.region YOUR_REGION_HERE
pip install boto3
sleep 180
Note on sleep 180 command: I have my ENI swap out on instance in an autoscaling group. This allows an extra 3 min for the other instance to shut down and drop the ENI, so the new one can pick it up. May or may not be necessary for your use case.
Use an AWS CLI command in user data to download the file onto the instance (example below):
aws s3api get-object --bucket YOURBUCKETNAME --key NAMEOFOBJECT.py /home/ec2-user/NAMEOFOBJECT.py
# coding: utf-8
import boto3
import sys
import time
client = boto3.client('ec2')
# Get the ENI ID
eni = client.describe_network_interfaces(
Filters=[
{
'Name': 'tag:Name',
'Values': ['Put the name of your ENI tag here']
},
]
)
eni_id = eni['NetworkInterfaces'][0]['NetworkInterfaceId']
# Get ENI status
eni_status = eni['NetworkInterfaces'][0]['Status']
print('Current Status: {}\n'.format(eni_status))
# Detach if in use
if eni_status == 'in-use':
eni_attach_id = eni['NetworkInterfaces'][0]['Attachment']['AttachmentId']
eni_detach = client.detach_network_interface(
AttachmentId=eni_attach_id,
DryRun=False,
Force=False
)
print(eni_detach)
# Wait until ENI is available
print('start\n-----')
while eni_status != 'available':
print('checking...')
eni_state = client.describe_network_interfaces(
Filters=[
{
'Name': 'tag:Name',
'Values': ['Put the name of your ENI tag here']
},
]
)
eni_status = eni_state['NetworkInterfaces'][0]['Status']
print('ENI is currently: ' + eni_status + '\n')
if eni_status != 'available':
time.sleep(10)
print('end')
# Get the instance ID
instance = client.describe_instances(
Filters=[
{
'Name': 'tag:Name',
'Values': ['Put the tag name of your instance here']
},
{
'Name': 'instance-state-name',
'Values': ['running']
}
]
)
instance_id = instance['Reservations'][0]['Instances'][0]['InstanceId']
# Attach the ENI
response = client.attach_network_interface(
DeviceIndex=1,
DryRun=False,
InstanceId=instance_id,
NetworkInterfaceId=eni_id
)
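One possible refactor of the script above (my suggestion, not part of the original): the poll loop can be factored into a function that takes a fetch-status callable, which makes the wait logic testable without AWS credentials and avoids repeating the describe_network_interfaces filter:

```python
# Sketch: factor the "wait until the ENI is available" loop into a function
# that takes a status-fetching callable. In the real script, fetch_status
# would wrap client.describe_network_interfaces(...) and return the Status.
import time

def wait_until_available(fetch_status, poll_seconds=10, sleep=time.sleep):
    """Poll fetch_status() until it returns 'available'."""
    status = fetch_status()
    while status != 'available':
        sleep(poll_seconds)
        status = fetch_status()
    return status

# Stubbed usage: pretend the ENI becomes available on the third poll.
states = iter(['in-use', 'detaching', 'available'])
print(wait_until_available(lambda: next(states), sleep=lambda _: None))
# -> available
```

boto3 also ships waiters (e.g. client.get_waiter('network_interface_available')) that cover the same pattern, at the cost of less control over logging.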
Below is what happened to one mail sent from a Drupal client.
$ grep 'B6693C0977' /var/log/maillog
Jan 19 14:12:30 instance-1 postfix/pickup[19329]: B6693C0977: uid=0 from=<admin@mailgun.domainA.com>
Jan 19 14:12:30 instance-1 postfix/cleanup[20035]: B6693C0977: message-id=<20170119141230.B6693C0977@mail.instance-1.c.tw-pilot.internal>
Jan 19 14:12:30 instance-1 postfix/qmgr[19330]: B6693C0977: from=<admin@mailgun.domainA.com>, size=5681, nrcpt=1 (queue active)
Jan 19 14:12:33 instance-1 postfix/smtp[20039]: B6693C0977:
to=<username@hotmail.com>, relay=smtp.mailgun.org[52.41.19.62]:2525, delay=2.4,
delays=0.02/0.05/1.8/0.53, dsn=5.7.1, status=bounced (host smtp.mailgun.org
[52.41.19.62] said: 550 5.7.1 **Relaying denied** (in reply to RCPT TO command))
Jan 19 14:12:33 instance-1 postfix/bounce[20050]: B6693C0977: sender non-delivery notification: ABB94C0976
Jan 19 14:12:33 instance-1 postfix/qmgr[19330]: B6693C0977: removed
Relevant excerpts from my /etc/postfix/main.cf are below:
# RELAYHOST SETTINGS
smtp_tls_security_level = encrypt
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
sender_dependent_relayhost_maps = hash:/etc/postfix/relayhost_map
and from /etc/postfix/sasl_passwd as follows:
@mailgun.domainA.com postmaster@mailgun.domainA.com:password
and from /etc/postfix/relayhost_map as follows:
@mailgun.domainA.com [smtp.mailgun.org]:2525
The permissions of the db files are as follows
# ls -lZ /etc/postfix/relayhost_map.db
-rw-r-----. root postfix unconfined_u:object_r:postfix_etc_t:s0 /etc/postfix/relayhost_map.db
# ls -lZ /etc/postfix/sasl_passwd.db
-rw-r-----. root postfix unconfined_u:object_r:postfix_etc_t:s0 /etc/postfix/sasl_passwd.db
The problem is that outbound mail is not going out, and no logs appear in the Mailgun console. Any insight is appreciated.
I know that this is an old question now but I've just had the same issue and wanted to post a response for anyone who comes across this article in future.
I believe your issue is in /etc/postfix/relayhost_map, where you should have the following. Note that there are no brackets; for me it was the inclusion of brackets that was causing the issue:
@mailgun.domainA.com smtp.mailgun.org:2525
For anyone who is not using /etc/postfix/relayhost_map and is doing it all in /etc/postfix/sasl_passwd directly the same applies:
smtp.mailgun.org:2525 postmaster@mailgun.domainA.com:password
Don't forget to regenerate the Postfix sasl_passwd.db file and restart the service afterwards:
sudo postmap /etc/postfix/sasl_passwd
sudo systemctl restart postfix
Or sudo service postfix restart if you're on an older system / not running systemd.
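If these map files are generated by a deploy script, a tiny helper that always emits the bracket-free form can prevent the regression from creeping back in. This is a sketch of my own; the domain, host, and port are the question's examples:

```python
# Sketch: emit a sender-dependent relayhost_map line in the bracket-free
# form that worked (brackets around the host were the reported problem).
def relayhost_line(sender_domain: str, relay_host: str, port: int) -> str:
    """e.g. '@mailgun.domainA.com smtp.mailgun.org:2525'"""
    return f"@{sender_domain} {relay_host}:{port}"

print(relayhost_line("mailgun.domainA.com", "smtp.mailgun.org", 2525))
# -> @mailgun.domainA.com smtp.mailgun.org:2525
```

Whatever writes the file should still run postmap afterwards, since Postfix reads the .db, not the text file.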
Usually this is related to problems on their platform. If everything was working previously, just open a ticket; they usually fix it in a few hours (yes, a few hours, which is kind of hard).
Is there any way to detect, in general, whether your code is executing on an Azure or Amazon virtual machine? I am not referring to some sort of web or worker role in particular; I mean, given any executable, is there anything that resolves that machine to a cloud VM? For example, under Azure there is no domain, so I cannot simply rely on a domain name.
AWS
If your guest has networking up then you can probe the instance metadata by accessing http://169.254.169.254
For example:
$ curl http://169.254.169.254/1.0/meta-data/instance-id
i-87dc2f76
However, hitting the network is fairly heavyweight.
On AWS you can also check by looking at dmidecode:
$ /usr/sbin/dmidecode -s bios-version | tr "[:upper:]" "[:lower:]" | grep -q "amazon"
dmidecode is lightweight since it only reads the memory of the guest OS. However, as pointed out in previous answers, it depends on Amazon continuing to include the word "amazon" in its version strings.
Azure
On Azure, you can detect the hypervisor details, but this doesn't let you discriminate between Azure and on-premises Hyper-V. Depending on your scenario this might not be necessary.
To detect Azure/Hyper-V using dmidecode, check the following strings:
$ /usr/sbin/dmidecode -s system-manufacturer | tr "[:upper:]" "[:lower:]" | grep -q "microsoft corporation"
$ /usr/sbin/dmidecode -s system-product-name | tr "[:upper:]" "[:lower:]" | grep -q "virtual machine"
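On Linux the same DMI strings are exposed under /sys/class/dmi/id, so you can do this without shelling out to dmidecode. The classification below is a sketch that keys on the same substrings as the commands above (and inherits the same caveat that vendors may change them):

```python
# Sketch: classify a host from its DMI vendor strings, using the same
# substrings as the dmidecode checks above. On Linux, the strings can be
# read from files such as /sys/class/dmi/id/bios_version,
# /sys/class/dmi/id/sys_vendor and /sys/class/dmi/id/product_name.
def classify(vendor_strings) -> str:
    joined = " ".join(s.lower() for s in vendor_strings)
    if "amazon" in joined:
        return "aws"
    if "microsoft corporation" in joined and "virtual machine" in joined:
        return "azure-or-hyperv"   # cannot distinguish Azure from Hyper-V
    return "unknown"

print(classify(["4.2.amazon"]))                                # -> aws
print(classify(["Microsoft Corporation", "Virtual Machine"]))  # -> azure-or-hyperv
```

Reading /sys avoids the root requirement of dmidecode on many distributions, since several of those files are world-readable.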
You could look at the machine's IP address and determine whether it is in a particular cloud's IP address block.
For Azure, the published list of IP address ranges for each subregion is an XML file at:
http://download.microsoft.com/download/E/F/C/EFC5E4D8-E4EA-4EFE-9356-D8AEEBC85F50/Azure_IP_Ranges.xml
Amazon will post a blog entry when they add new ranges. They are currently:
US East (Northern Virginia):
72.44.32.0/19 (72.44.32.0 - 72.44.63.255)
67.202.0.0/18 (67.202.0.0 - 67.202.63.255)
75.101.128.0/17 (75.101.128.0 - 75.101.255.255)
174.129.0.0/16 (174.129.0.0 - 174.129.255.255)
204.236.192.0/18 (204.236.192.0 - 204.236.255.255)
184.73.0.0/16 (184.73.0.0 – 184.73.255.255)
184.72.128.0/17 (184.72.128.0 - 184.72.255.255)
184.72.64.0/18 (184.72.64.0 - 184.72.127.255)
50.16.0.0/15 (50.16.0.0 - 50.17.255.255)
50.19.0.0/16 (50.19.0.0 - 50.19.255.255)
107.20.0.0/14 (107.20.0.0 - 107.23.255.255)
23.20.0.0/14 (23.20.0.0 – 23.23.255.255)
54.242.0.0/15 (54.242.0.0 – 54.243.255.255)
54.234.0.0/15 (54.234.0.0 – 54.235.255.255) NEW
54.236.0.0/15 (54.236.0.0 – 54.237.255.255) NEW
US West (Oregon):
50.112.0.0/16 (50.112.0.0 - 50.112.255.255)
54.245.0.0/16 (54.245.0.0 – 54.245.255.255)
US West (Northern California):
204.236.128.0/18 (204.236.128.0 - 204.236.191.255)
184.72.0.0/18 (184.72.0.0 – 184.72.63.255)
50.18.0.0/16 (50.18.0.0 - 50.18.255.255)
184.169.128.0/17 (184.169.128.0 - 184.169.255.255)
54.241.0.0/16 (54.241.0.0 – 54.241.255.255)
EU (Ireland):
79.125.0.0/17 (79.125.0.0 - 79.125.127.255)
46.51.128.0/18 (46.51.128.0 - 46.51.191.255)
46.51.192.0/20 (46.51.192.0 - 46.51.207.255)
46.137.0.0/17 (46.137.0.0 - 46.137.127.255)
46.137.128.0/18 (46.137.128.0 - 46.137.191.255)
176.34.128.0/17 (176.34.128.0 - 176.34.255.255)
176.34.64.0/18 (176.34.64.0 – 176.34.127.255)
54.247.0.0/16 (54.247.0.0 – 54.247.255.255)
54.246.0.0/16 (54.246.0.0 – 54.246.255.255) NEW
Asia Pacific (Singapore)
175.41.128.0/18 (175.41.128.0 - 175.41.191.255)
122.248.192.0/18 (122.248.192.0 - 122.248.255.255)
46.137.192.0/18 (46.137.192.0 - 46.137.255.255)
46.51.216.0/21 (46.51.216.0 - 46.51.223.255)
54.251.0.0/16 (54.251.0.0 – 54.251.255.255)
Asia Pacific (Tokyo)
175.41.192.0/18 (175.41.192.0 - 175.41.255.255)
46.51.224.0/19 (46.51.224.0 - 46.51.255.255)
176.32.64.0/19 (176.32.64.0 - 176.32.95.255)
103.4.8.0/21 (103.4.8.0 - 103.4.15.255)
176.34.0.0/18 (176.34.0.0 - 176.34.63.255)
54.248.0.0/15 (54.248.0.0 - 54.249.255.255)
South America (Sao Paulo)
177.71.128.0/17 (177.71.128.0 - 177.71.255.255)
54.232.0.0/16 (54.232.0.0 – 54.232.255.255) NEW
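Given any such list of CIDR blocks, the membership test itself is simple with Python's stdlib ipaddress module. The two blocks below are copied from the US East list above (and, like that list, will go stale; fetch the current published ranges in practice):

```python
# Sketch: test whether an address falls inside published cloud CIDR blocks.
# The two example blocks are from the (dated) US East list above.
import ipaddress

EC2_US_EAST = [ipaddress.ip_network(c) for c in ("72.44.32.0/19", "50.16.0.0/15")]

def in_ranges(ip: str, networks) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)

print(in_ranges("50.17.1.2", EC2_US_EAST))  # -> True  (inside 50.16.0.0/15)
print(in_ranges("8.8.8.8", EC2_US_EAST))    # -> False
```

The same function works for the Azure ranges below; only the list of networks changes.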
This is an internal way to check whether your machine instance is in Amazon or not:
dmidecode | grep Version
version: 4.2.amazon <--- This is what you would want to key on
This only works as long as Amazon maintains its signature in the VM's virtual BIOS settings. I have not worked with Azure, but I am sure you could extract similar information using dmidecode; I have done it with VMware and VirtualBox.
@Dan's answer no longer works for Azure; use the following URL to get a better list:
http://msdn.microsoft.com/en-us/library/windowsazure/dn175718.aspx
If this URL ever disappears, here is a copy from today (August 12th, 2013):
Europe West
65.52.128.0/19
213.199.128.0/20
168.63.0.0/19
168.63.96.0/19
137.116.192.0/19
137.117.128.0/17
168.61.56.0/21
Europe North
65.52.64.0/20
65.52.224.0/19
168.63.92.0/22
168.63.32.0/19
94.245.88.0/21
94.245.104.0/21
168.63.64.0/20
168.63.80.0/20
168.61.96.0/19
137.116.224.0/20
US East
168.62.32.0/19
157.56.176.0/21
168.62.160.0/19
168.61.32.0/20
168.61.48.0/21
137.117.64.0/18
137.135.64.0/18
138.91.96.0/19
137.116.112.0/20
US West
168.62.192.0/20
168.62.208.0/21
168.61.0.0/20
168.61.64.0/20
137.117.0.0/19
137.135.0.0/18
137.116.184.0/21
138.91.64.0/19
65.52.112.0/20
168.63.89.0/24
157.56.160.0/21
168.62.0.0/19
US North Central
65.52.0.0/19
65.52.0.0/20
65.52.16.0/20
65.52.192.0/19
65.52.48.0/20
157.55.24.0/21
157.55.64.0/20
157.55.160.0/20
157.55.136.0/21
157.55.208.0/20
157.56.8.0/21
157.55.252.0/22
168.62.96.0/19
157.55.248.0/22
168.62.224.0/19
US South Central
157.55.176.10/22
157.55.183.223/27
157.55.184.10/22
157.55.191.223/27
157.55.192.10/24
157.55.193.223/27
157.55.194.10/24
157.55.195.223/27
157.55.196.10/23
157.55.200.10/23
157.55.80.10/23
157.55.83.223/27
157.55.84.10/23
157.55.87.223/27
65.52.32.10/22
65.52.39.224/28
70.37.160.10/22
70.37.167.224/28
70.37.118.0/24
70.37.119.138/28
70.37.119.170/28
70.37.48.10/22
70.37.55.224/28
70.37.56.10/22
70.37.63.224/28
70.37.116.0/24
SE Asia
111.221.96.0/20
168.63.160.0/19
111.221.80.0/20
168.63.224.0/19
137.116.128.0/19
East Asia
65.52.160.0/19
111.221.78.0/23
168.63.128.0/19
168.63.192.0/19
137.116.160.0/20
You can use a metadata endpoint for Azure VMs as well:
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Proxy $Null -Uri "http://169.254.169.254/metadata/instance?api-version=2021-02-01" | ConvertTo-Json -Depth 64
Ref:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/instance-metadata-service?tabs=windows
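Once the IMDS request succeeds, pulling the interesting fields out of the JSON is straightforward. The sample document below is an abbreviated assumption of the real response shape (the actual response contains many more fields under compute and network):

```python
# Sketch: extract identifying fields from an Azure IMDS instance document.
# The sample JSON is a trimmed-down assumption of the real response.
import json

def vm_identity(metadata_json: str):
    """Return (azEnvironment, vmId) from an IMDS /metadata/instance document."""
    doc = json.loads(metadata_json)
    compute = doc.get("compute", {})
    return compute.get("azEnvironment"), compute.get("vmId")

sample = '{"compute": {"azEnvironment": "AzurePublicCloud", "vmId": "1234"}}'
print(vm_identity(sample))  # -> ('AzurePublicCloud', '1234')
```

A timeout on the 169.254.169.254 request is itself a useful signal: off-cloud machines typically have nothing answering at that address.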