Cannot set LinkLocalAddressBehavior to LinkLocalAlwaysOff - c++

I'm trying to set LinkLocalAddressBehavior for an interface to LinkLocalAlwaysOff using the SetIpInterfaceEntry function, but I always get ERROR_INVALID_PARAMETER. When I set LinkLocalAddressBehavior to LinkLocalDelayed, SetIpInterfaceEntry executes without a problem.
I haven't found anything that might help with this problem on MSDN (SetIpInterfaceEntry, MIB_IPINTERFACE_ROW or NL_LINK_LOCAL_ADDRESS_BEHAVIOR).
Any suggestions?
Thanks!
UPDATE: Code sample:
// Initialize MIB_IPINTERFACE_ROW with actual InterfaceLuid:
auto row = MIB_IPINTERFACE_ROW{ AF_INET, 1689399632855040 };
// GetIpInterfaceEntry succeeds
auto result = GetIpInterfaceEntry(&row);
// Setting the value:
row.LinkLocalAddressBehavior = LinkLocalAlwaysOff;
// SetIpInterfaceEntry fails with ERROR_INVALID_PARAMETER:
result = SetIpInterfaceEntry(&row);

According to this article:
The assignment of an IPv4 Link-Local address on an interface is based
solely on the state of the interface, and is independent of any other
protocols such as DHCP. A host MUST NOT alter its behavior and use of
other protocols such as DHCP because the host has assigned an IPv4
Link-Local address to an interface.
So we are not able to change this behavior once it is enabled; setting LinkLocalDelayed succeeds because the original value was already LinkLocalDelayed.
For IPv6, I found an answer on MSDN. It seems the behavior is similar: once it has been enabled, it cannot be disabled.
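For completeness, here is a minimal sketch of the pattern that does work (setting LinkLocalDelayed), using the documented InitializeIpInterfaceEntry/GetIpInterfaceEntry/SetIpInterfaceEntry sequence. The LUID value is the hypothetical one from the question, and error handling is reduced to printing the returned code:

#include <winsock2.h>
#include <iphlpapi.h>
#include <cstdio>
#pragma comment(lib, "iphlpapi.lib")

int main()
{
    MIB_IPINTERFACE_ROW row;
    InitializeIpInterfaceEntry(&row);            // fill the row with defaults first
    row.Family = AF_INET;
    row.InterfaceLuid.Value = 1689399632855040;  // hypothetical LUID taken from the question

    DWORD err = GetIpInterfaceEntry(&row);
    if (err != NO_ERROR) { std::printf("GetIpInterfaceEntry failed: %lu\n", err); return 1; }

    // LinkLocalAlwaysOff is rejected once link-local behavior is already active,
    // but LinkLocalDelayed round-trips fine (see above).
    row.LinkLocalAddressBehavior = LinkLocalDelayed;

    err = SetIpInterfaceEntry(&row);
    if (err != NO_ERROR) { std::printf("SetIpInterfaceEntry failed: %lu\n", err); return 1; }
    return 0;
}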


findModuleByPath("host") returns nullptr in OMNeT++

In OMNeT++ I'm working on the example aloha. I am trying to add an acknowledgement message sent from the server to the node. So, I have defined cModule *host and added the line host = findModuleByPath("host"); to the initialize() method in Server.cc, but it returns nullptr. I have also seen that the getModuleByPath() method does the same work but throws an exception instead of returning a nullptr.
It cannot find the host module even though I have defined it. I believe I am missing something, but I don't know what. Is there a good example of a network (with multiple nodes) that also sends an acknowledgement message?
There are several issues in using cModule *host = findModuleByPath("host") in initialize() of server.
According to 4.11.4 Finding Modules by Path, that call looks for a submodule named host inside the current module, i.e. inside server. Of course, server does not contain host, so nullptr is returned. To find a sibling module called host one should use
cModule *host = findModuleByPath("^.host").
In Aloha there is no single host module, but a vector of modules: the first host is named host[0], the second host[1], and so on. Therefore, it would be possible to use:
cModule *host = findModuleByPath("^.host[2]")
Another way is the following command:
cModule *host = getParentModule()->getSubmodule("host", 2)
Be aware that initialize() is called in the network's modules sequentially, and the order in which the next module is chosen is not guaranteed by the simulation environment; e.g. initialize() may already have been called in host[1] but not yet in server.
Multi-stage initialization may be used to make sure that one stage of initialize() has completed in all modules before the next stage runs, as sketched below.
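For illustration, a minimal hand-written sketch of two-stage initialization (not taken from the Aloha sources): the peer module is resolved only in stage 1, after every module has finished stage 0.

#include <omnetpp.h>
using namespace omnetpp;

class Server : public cSimpleModule
{
  protected:
    cModule *host = nullptr;

    virtual int numInitStages() const override { return 2; }

    virtual void initialize(int stage) override
    {
        if (stage == 0) {
            // stage 0: local setup only; other modules may not be initialized yet
        }
        else if (stage == 1) {
            // stage 1: every module has completed stage 0, so lookups are safe
            host = getParentModule()->getSubmodule("host", 0);   // first element of the host[] vector
            if (!host)
                throw cRuntimeError("host[0] not found");
        }
    }
};

Define_Module(Server);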

How to get your public ip address using Qt c++

I'm writing a GUI for a network address calculator. All the coding is done, but now I want a button that will get your computer's IP address. I was looking for a solution and saw various posts on Stack Overflow, but none of them worked for me...
Edit: this piece of code worked for me
QTcpSocket socket;
socket.connectToHost("8.8.8.8", 53);
if (socket.waitForConnected()) {
    QString text = socket.localAddress().toString();
    ui->ipAddress->setText(text);
} else {
    QMessageBox msg;
    msg.setText("Couldn't connect to the DNS server! No internet connection...");
    msg.setWindowTitle("No internet connection");
    msg.setIcon(QMessageBox::Critical);
    msg.exec();
}
I think the class you are looking for is the QNetworkInterface class.
From the man page:
The QNetworkInterface class provides a listing of the host's IP
addresses and network interfaces.
For example, calling QNetworkInterface::allAddresses() is a quick way to get a list of all the IP addresses on your machine. As for which one is the "public IP address", that's not a well-defined concept. On most modern consumer setups, it's arguable that there is no public IP address, as consumer PCs are typically hidden behind a NAT layer, and as such the only public IP address is on the network router, not on the user's computer itself.
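For illustration, a small sketch (the function name is mine) that picks the first non-loopback IPv4 address reported by QNetworkInterface::allAddresses(); behind NAT this will be the LAN address rather than a public one.

#include <QNetworkInterface>
#include <QHostAddress>
#include <QAbstractSocket>
#include <QString>

QString firstLocalIPv4()
{
    const QList<QHostAddress> addresses = QNetworkInterface::allAddresses();
    for (const QHostAddress &address : addresses) {
        // skip loopback and anything that is not IPv4
        if (address.protocol() == QAbstractSocket::IPv4Protocol && !address.isLoopback())
            return address.toString();
    }
    return QString();   // nothing suitable found
}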
You can use QHostInfo for this purpose, e.g.:
auto list = QHostInfo::fromName(QHostInfo::localHostName()).addresses();
I am using the following code snippet in my current project to get the IP address, and it works fine on both desktop Linux and embedded Linux. device is the network interface name, like "wlan0", "eth0", etc. It returns a QNetworkAddressEntry, which contains both the IP and the netmask; use the ip() function to get the IP address. Usually the first address entry is the non-virtual one, which is why I take the first one.
const QNetworkInterface& networkInterface = QNetworkInterface::interfaceFromName(device);
if (networkInterface.isValid())
{
    const QList<QNetworkAddressEntry>& addressEntries = networkInterface.addressEntries();
    if (!addressEntries.isEmpty())
        return addressEntries.front();
}
return QNetworkAddressEntry(); // not found, return an invalid address entry
Don't forget to add QT += network to your pro file.
Using QNetworkAccessManager, you can tap this free REST API: https://www.ipify.org/
It supports various options, such as returning the result as JSON. If you simply GET that URL, it responds with the IP address you are most likely hoping to get back, in raw format (i.e. a naked string).
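For illustration, a rough, blocking sketch using QNetworkAccessManager against ipify's plain-text endpoint (https://api.ipify.org). The function name is mine; in real GUI code you would react to the finished() signal asynchronously instead of spinning a local event loop.

#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QEventLoop>
#include <QUrl>

QString fetchPublicIp()
{
    QNetworkAccessManager manager;
    QNetworkReply *reply = manager.get(QNetworkRequest(QUrl("https://api.ipify.org")));

    QEventLoop loop;   // wait for the reply (demo only; prefer async handling in a GUI)
    QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
    loop.exec();

    const QString ip = (reply->error() == QNetworkReply::NoError)
                           ? QString::fromUtf8(reply->readAll()).trimmed()
                           : QString();
    reply->deleteLater();
    return ip;
}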

What's the correct way to verify an SSL certificate in Win32?

I want to verify an SSL certificate in Win32 using C++. I think I want to use the Cert* API so that I can get the benefit of the Windows certificate store. This is what I've come up with.
Is it correct?
Is there a better way to do this?
Am I doing anything wrong?
bool IsValidSSLCertificate( PCCERT_CONTEXT certificate, LPWSTR serverName )
{
    // note: the usage identifiers are ANSI OID strings, so LPSTR (not LPTSTR) is required
    LPSTR usages[] = { szOID_PKIX_KP_SERVER_AUTH };

    CERT_CHAIN_PARA params                           = { sizeof( params ) };
    params.RequestedUsage.dwType                     = USAGE_MATCH_TYPE_AND;
    params.RequestedUsage.Usage.cUsageIdentifier     = _countof( usages );
    params.RequestedUsage.Usage.rgpszUsageIdentifier = usages;

    PCCERT_CHAIN_CONTEXT chainContext = 0;

    if ( !CertGetCertificateChain( NULL,
                                   certificate,
                                   NULL,
                                   NULL,
                                   &params,
                                   CERT_CHAIN_REVOCATION_CHECK_CHAIN,
                                   NULL,
                                   &chainContext ) )
    {
        return false;
    }

    SSL_EXTRA_CERT_CHAIN_POLICY_PARA sslPolicy = { sizeof( sslPolicy ) };
    sslPolicy.dwAuthType     = AUTHTYPE_SERVER;
    sslPolicy.pwszServerName = serverName;

    CERT_CHAIN_POLICY_PARA policy = { sizeof( policy ) };
    policy.pvExtraPolicyPara = &sslPolicy;

    CERT_CHAIN_POLICY_STATUS status = { sizeof( status ) };

    BOOL verified = CertVerifyCertificateChainPolicy( CERT_CHAIN_POLICY_SSL,
                                                      chainContext,
                                                      &policy,
                                                      &status );

    CertFreeCertificateChain( chainContext );

    return verified && status.dwError == 0;
}
You should be aware of RFC3280 section 6.1 and RFC5280 section 6.1. Both describe algorithms for validating certificate paths. Even though Win32 API takes care of some things for you, it could still be valuable to know about the process in general.
Also, here’s a (in my opinion) pretty trustworthy reference: Chromium certificate verification code.
Overall, I think your code isn't incorrect. But here are a few things I'd look into or change, if I were you:
1. Separate Common Name Validation
Chromium validates certificate common name separately from the chain. Apparently they've noticed some problems with it. See the comments for their rationale:
cert_verify_proc_win.cc:731 // Certificate name validation happens separately, later, using an internal
cert_verify_proc_win.cc:732 // routine that has better support for RFC 6125 name matching.
2. Use CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT
Chromium also uses the CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT flag instead of CERT_CHAIN_REVOCATION_CHECK_CHAIN. I actually started looking into this before I found their code, and it reinforced my belief that you should use CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT.
Even though both aforementioned RFCs specify that a self-signed trust anchor is not considered part of a chain, the documentation for CertGetCertificateChain (http://msdn.microsoft.com/en-us/library/windows/desktop/aa376078(v=vs.85).aspx) says it builds a chain up to, if possible, a trusted root certificate. A trusted root certificate is defined (on the same page) as a trusted self-signed certificate.
This eliminates the possibility that *EXCLUDE_ROOT might skip revocation checking for a non-root trust anchor (Win32 actually requires trust anchors to be self-signed, even though no RFC requires this; that requirement is not officially documented, though).
Now, since a root CA certificate can not revoke itself (the CRL could not be signed/verified), it seems to me that these two flags are identical.
I did some googling and stumbled across this forum post: http://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/9f95882a-1a68-477a-80ee-0a7e3c7ae5cf/x509revocationflag-question?forum=windowssecurity. A member of the .NET Product Group (supposedly) claims that in practice the flags act the same if the root is self-signed (in theory, the ENTIRE_CHAIN flag would check the root certificate for revocation if it included a CDP extension, but that can't happen).
He also recommends to use the *EXCLUDE_ROOT flag, because the other flag could cause an unnecessary network request, if the self-signed root CA includes the CDP extension.
Unfortunately:
I can’t find any officially documented explanation on the differences between the two flags.
Even though it is likely that the linked discussion applies to the same Win32 API flags under the hood of .NET, it is not guaranteed.
To be completely sure that it’s ok to use CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT, I googled a bit more and found the Chromium SSL certificate verification code I linked to at the top of my reply.
As an added bonus, the Chromium cert_verify_proc_win.cc file contains the following hints about IE verification code:
618: // IE passes a non-NULL pTime argument that specifies the current system
619: // time. IE passes CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT as the
620: // chain_flags argument.
Not sure how they’d know this, but at this point I’d feel comfortable using CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT.
3. Different Accepted Certificate Usages
I noticed Chromium also specifies 3 certificate usages instead of 1:
szOID_PKIX_KP_SERVER_AUTH,
szOID_SERVER_GATED_CRYPTO,
szOID_SGC_NETSCAPE
From what I can gather through Google, the other usages may be required by older web browsers; otherwise they can fail to establish a secure connection.
If Chromium deems fit to include these usages, I'd follow suit.
Note that if you change your code, you should also set params.RequestedUsage.dwType to USAGE_MATCH_TYPE_OR instead of USAGE_MATCH_TYPE_AND.
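For illustration, the affected fragment of the question's IsValidSSLCertificate() would then look roughly like this (a sketch, not the only valid form):

// accept any one of the three usages instead of requiring all of them
LPSTR usages[] = {
    szOID_PKIX_KP_SERVER_AUTH,
    szOID_SERVER_GATED_CRYPTO,
    szOID_SGC_NETSCAPE
};

CERT_CHAIN_PARA params                           = { sizeof( params ) };
params.RequestedUsage.dwType                     = USAGE_MATCH_TYPE_OR;
params.RequestedUsage.Usage.cUsageIdentifier     = _countof( usages );
params.RequestedUsage.Usage.rgpszUsageIdentifier = usages;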
I can’t think of any other comments at the moment. But if I were you, I’d check out Chromium source myself (and maybe Firefox too) - just to be sure I haven’t missed anything.
I think the best answer depends on what exactly you are attempting to do.
I will caution you that SSL is based on the assumption that both endpoints want a secure connection. If either endpoint isn't interested in maintaining security, then there is none.
It's a trivial effort to patch the byte code in your distributed binary so that this function simply returns true. That's why Windows moved a lot of validation into the kernel. But they didn't anticipate people running Windows on virtual hardware, which makes circumventing the OS just about as trivial.
Now consider that you expect to be handed a certificate from some source; nothing prevents that source from obtaining the very same certificate from a reliable source and then handing it to you. So you cannot rely on certificates to "prove" that anyone is anyone in particular.
The only protection gained from certificates are in preventing outsiders, not endpoints, from breaching the confidentiality of the message being transported.
Any other use is doomed to fail, and it will fail eventually with potentially catastrophic results.
Sorry for the big post. The comment section has a word limit.
The functions CertGetCertificateChain and CertVerifyCertificateChainPolicy go together. This part is correct.
For CertGetCertificateChain the flag can be set to any of the following three if you want to check for revocation:
CERT_CHAIN_REVOCATION_CHECK_END_CERT
CERT_CHAIN_REVOCATION_CHECK_CHAIN
CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT.
Only one of them can be used; these three options cannot be ORed together. Besides one of these flags, you can also control how the chain is built: using the local cache, or only CRL, or OCSP. For these considerations read this link.
An error executing the function (more simply, a return value of 0) does not mean the certificate is invalid; rather, you were unable to perform the operation. For error information use GetLastError(). So your logic of returning false is wrong; it is more a case of reporting the error and letting the client code decide whether to try again or go on and do other things.
In this link there is a section called "classify the error"; please read that. Basically, you should check certChainContext->TrustStatus.dwErrorStatus, where a set of error statuses is ORed together; see the CERT_TRUST_STATUS reference on MSDN. This is where your business logic goes. For example, if the error status contains (CERT_TRUST_REVOCATION_STATUS_UNKNOWN | CERT_TRUST_IS_OFFLINE_REVOCATION), the revocation check could not be performed, and you can decide what you want to do (let the certificate pass or still mark it as invalid).
So, before even calling CertVerifyCertificateChainPolicy, you have the option to discard the certificate or flag a validation error, as in the fragment below.
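For example, a rough fragment reusing chainContext from the question's code (which bits you tolerate, if any, is your business decision):

DWORD errorStatus = chainContext->TrustStatus.dwErrorStatus;

if (errorStatus == CERT_TRUST_NO_ERROR)
{
    // chain built cleanly; proceed to CertVerifyCertificateChainPolicy
}
else if (errorStatus & (CERT_TRUST_REVOCATION_STATUS_UNKNOWN | CERT_TRUST_IS_OFFLINE_REVOCATION))
{
    // revocation could not be checked (e.g. CRL/OCSP source unreachable);
    // decide whether to accept the certificate anyway or treat it as invalid
}
else
{
    // any other bit (untrusted root, expired, revoked, ...) is a hard failure
}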
If you do go on to CertVerifyCertificateChainPolicy, the Chromium code is a wonderful reference for how to map policy_status.dwError to your own error class/enum.

Determining the network connection link speed

How do I programmatically determine the network connection link speed for an active network connection - like Task Manager shows you in the Networking tab? I'm not really after the bandwidth available, just a figure for the current connection, e.g. 54Mbps, 100Mbps etc.
The Win32_NetworkAdapter WMI class can help you (the Speed property). It returns the value 54000000 for my Wi-Fi adapter connected to an 802.11g access point.
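For illustration, a condensed sketch of querying that class from C++ through the WMI COM interfaces (most error handling omitted; the NetEnabled filter assumes Windows Vista or later; uint64 properties such as Speed arrive as BSTR strings in the VARIANT):

#include <comdef.h>
#include <Wbemidl.h>
#include <iostream>
#pragma comment(lib, "wbemuuid.lib")

int main()
{
    if (FAILED(CoInitializeEx(0, COINIT_MULTITHREADED))) return 1;
    CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_DEFAULT,
                         RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE, NULL);

    IWbemLocator* locator = nullptr;
    if (FAILED(CoCreateInstance(CLSID_WbemLocator, 0, CLSCTX_INPROC_SERVER,
                                IID_IWbemLocator, reinterpret_cast<LPVOID*>(&locator)))) return 1;

    IWbemServices* services = nullptr;
    if (FAILED(locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), NULL, NULL, 0, 0, 0, 0, &services))) return 1;
    CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, NULL,
                      RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE);

    IEnumWbemClassObject* enumerator = nullptr;
    services->ExecQuery(_bstr_t(L"WQL"),
                        _bstr_t(L"SELECT Name, Speed FROM Win32_NetworkAdapter WHERE NetEnabled = TRUE"),
                        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY, NULL, &enumerator);

    IWbemClassObject* obj = nullptr;
    ULONG returned = 0;
    while (enumerator && SUCCEEDED(enumerator->Next(WBEM_INFINITE, 1, &obj, &returned)) && returned)
    {
        VARIANT name, speed;
        obj->Get(L"Name", 0, &name, 0, 0);
        obj->Get(L"Speed", 0, &speed, 0, 0);   // uint64, delivered as a BSTR string
        std::wcout << name.bstrVal << L": "
                   << (speed.vt == VT_BSTR ? speed.bstrVal : L"?") << L" bps\n";
        VariantClear(&name);
        VariantClear(&speed);
        obj->Release();
    }

    if (enumerator) enumerator->Release();
    if (services) services->Release();
    if (locator) locator->Release();
    CoUninitialize();
    return 0;
}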
In the end I found the Win32_PerfRawData_Tcpip_NetworkInterface WMI class, as I need to support legacy platforms, which Win32_NetworkAdapter unfortunately doesn't. Win32_PerfRawData_Tcpip_NetworkInterface has a CurrentBandwidth property which gives me what I need on all required platforms (I realise I said I didn't need "bandwidth", but it's acceptable and appears to return the "nominal bandwidth" of the adapter anyway).
Thanks to all those who posted, pointing me in the right direction.
The .NET way to get the adapter speed is:
IPGlobalProperties computerProperties = IPGlobalProperties.GetIPGlobalProperties();
NetworkInterface[] nics = NetworkInterface.GetAllNetworkInterfaces();
if (nics != null)
    for (int i = 0; i < nics.Length; i++)
        Console.WriteLine("Adapter '{0}' speed : {1}", nics[i].Name, nics[i].Speed);
Some adapters are tunnels, so their speed will be returned as 0.
Read the NetworkInterface documentation on MSDN for more information.

Get the domain name of a computer from Windows API

In my application I need to know if the computer is the primary domain controller of a domain, so I need to know the domain of the computer to call NetGetDCName function.
Thanks.
EDIT: The problem is related with the DCOM authentication so I need to know the domain to use the DOMAIN\USERNAME in case of a PDC or COMPUTER\USERNAME if I need to use the local authentication database of the computer.
The NetWkstaGetInfo() function returns either the domain name or the workgroup of the computer, and is therefore not a reliable way to determine if the computer is a member of a domain.
The GetComputerNameEx() function will help, used with the ComputerNameDnsDomain parameter, as shown below. This will return an empty string if the computer is in a workgroup, or the DNS name of the domain:
DWORD bufSize = MAX_PATH;
TCHAR domainNameBuf[ MAX_PATH ];
GetComputerNameEx( ComputerNameDnsDomain, domainNameBuf, &bufSize );
I would consider using NetWkstaGetInfo() and passing the local computer name as the first parameter.
#include <Windows.h>
#include <LM.h>
#include <StrSafe.h>
#pragma comment(lib, "Netapi32.lib")

WCHAR domain_name[256];
LPWKSTA_INFO_100 info = NULL;   // the API allocates the buffer for us

if (NERR_Success == NetWkstaGetInfo(L"THIS-COMPUTER", 100, (LPBYTE*)&info) &&
    SUCCEEDED(StringCchCopyW(domain_name, ARRAYSIZE(domain_name), info->wki100_langroup))) {
    // use domain_name here...
}
if (info) NetApiBufferFree(info);   // free the buffer allocated by NetWkstaGetInfo
You can use the NetWkstaGetInfo function to do this.
If you pass in null for the computer name it returns info about the local computer.
It will return a WKSTA_INFO_100 instance, which includes the domain name.
If you just want to know whether the machine the code is running on is the primary domain controller, I think your best option is NetServerGetInfo. If you pass 101 as the level parameter it returns a SERVER_INFO_101 structure. Then look at its sv101_type member:
sv101_type
The type of software the computer is running. This member can be one of the following values.
(...)
SV_TYPE_DOMAIN_CTRL: A primary domain controller.
There is a specific function to determine the join status of a Workstation.
https://learn.microsoft.com/en-gb/windows/desktop/api/lmjoin/nf-lmjoin-netgetjoininformation
There can be three statuses: joined to a workgroup (the good old Windows 3.0 networking), status == NetSetupWorkgroupName; joined to a domain, status == NetSetupDomainName;
or unjoined (standalone), status == NetSetupUnjoined.
So if you know this, you can call the respective required functions in a reliable way.
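For illustration, a minimal sketch of calling NetGetJoinInformation (the names and output are mine; error handling is reduced to the bare minimum):

#include <windows.h>
#include <lm.h>
#include <iostream>
#pragma comment(lib, "netapi32.lib")

int main()
{
    LPWSTR nameBuffer = NULL;
    NETSETUP_JOIN_STATUS joinStatus = NetSetupUnknownStatus;

    if (NetGetJoinInformation(NULL, &nameBuffer, &joinStatus) == NERR_Success)
    {
        switch (joinStatus)
        {
        case NetSetupDomainName:
            std::wcout << L"Joined to domain: " << nameBuffer << L"\n";
            break;
        case NetSetupWorkgroupName:
            std::wcout << L"In workgroup: " << nameBuffer << L"\n";
            break;
        default:
            std::wcout << L"Standalone / unjoined\n";
            break;
        }
        NetApiBufferFree(nameBuffer);   // free the buffer allocated by the API
    }
    return 0;
}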
Finally I have used this code. It works on the local machine; when executed remotely, nStatus gives an ACCESS_DENIED error.
NET_API_STATUS nStatus;
TOleString oleServerName = strServerName.c_str();
DWORD dwLevel = 101;
LPSERVER_INFO_101 pBufServer = NULL;
LPWKSTA_INFO_100 pBufWksta = NULL;

nStatus = NetServerGetInfo(oleServerName, dwLevel, (LPBYTE*)&pBufServer);
if (nStatus == NERR_Success &&
    (pBufServer->sv101_type & SV_TYPE_DOMAIN_CTRL))
{
    dwLevel = 100;
    nStatus = NetWkstaGetInfo(oleServerName, 100, (LPBYTE*)&pBufWksta);
    if (nStatus == NERR_Success)
    {
        AnsiString strDomain(pBufWksta->wki100_langroup);
        m_pgServerConnection->SetDomain(strDomain);
    }
}

// free the buffers allocated by the Net* APIs
if (pBufServer) NetApiBufferFree(pBufServer);
if (pBufWksta) NetApiBufferFree(pBufWksta);
Thanks to all :)