After changing a key's value from Machine2, the changed value is not seen from Machine1 - AppFabric

I built a sample application and ran it from two different machines, with both instances using the AppFabric cache. I set pollInterval="120" seconds in both applications' config files, with the settings below:
<localCache isEnabled="true"
sync="NotificationBased"
ttlValue="300"
objectCount="10"/>
<!--(optional) specify cache notifications poll interval-->
<clientNotification pollInterval="120" />
I also enabled notifications on the cluster using PowerShell.
From Machine1, I read the key called key1, whose value is "Value1".
Then, from Machine2, I changed the value of key1 to "Changed".
Then, from Machine2, I read key1; its value is now displayed as "Changed".
Then, after the poll interval of 2 minutes had passed, I read key1 from Machine1; its value is still displayed as "Value1". Why is it not displaying "Changed"?
Why is the change not detected by the application on Machine1? Why is the local cache invalidation not occurring?
Comment from Ahmed Ilyas:
Show the code you are using to read and write to the cache. You also have not explained how you configured AppFabric and these machines. Are they joined to the cluster?
Reads are done through the AppFabric read-through API, which is implemented in a separate project. Writes to the cache are done simply with the Put() method. As this is a sample project, I thought there was no need to update the database, only the cache cluster.
The config settings above are used by the application on each of the two machines.
I have granted both machines access to the cache cluster. One machine is both the AppFabric cache server and a cache client (Machine1).
Hope this helps you answer. The code is below:
public class CacheUtil
{
    private static DataCacheFactory _factory = null;
    private static DataCache _cache = null;

    static CacheUtil()
    {
        if (_cache == null)
        {
            // Declare array for cache host(s).
            DataCacheServerEndpoint[] servers = new DataCacheServerEndpoint[1];
            servers[0] = new DataCacheServerEndpoint("H1011.hoboo.net", 22233);

            // Set the local cache properties. In this example, it
            // is timeout-based with a timeout of 300 seconds (5 minutes).
            DataCacheLocalCacheProperties localCacheConfig;
            TimeSpan localTimeout = new TimeSpan(0, 5, 0);
            localCacheConfig = new DataCacheLocalCacheProperties(60, localTimeout, DataCacheLocalCacheInvalidationPolicy.TimeoutBased);

            // Set up the DataCacheFactory configuration.
            DataCacheFactoryConfiguration factoryConfig = new DataCacheFactoryConfiguration();
            //factoryConfig.ChannelOpenTimeout = new TimeSpan(0, 0, 0);
            //factoryConfig.Servers = servers;
            //factoryConfig.LocalCacheProperties = localCacheConfig;

            // NOTE: the programmatic configuration above is commented out,
            // so the factory below is built from the application config file.
            _factory = new DataCacheFactory();
            //_factory = new DataCacheFactory(factoryConfig);
            _cache = _factory.GetCache("default");
        }
    }

    public static DataCache GetCache()
    {
        if (_cache != null) return _cache;
        try
        {
            RuntimeContext.WriteAppFabricErrorLog(new AppFabricLogger()
            {
                CacheKey = "Connected to AppFabric Cache Server.",
                CacheData = "Connected to AppFabric Cache Server.",
                ErrorString = "Connected to AppFabric Cache Server."
            });
        }
        catch (Exception ex)
        {
            // Suppress error
        }
        return _cache;
    }
}
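Since the Servers/LocalCacheProperties lines above are commented out, new DataCacheFactory() builds the client purely from the application config file. For comparison, here is a minimal sketch of wiring notification-based invalidation programmatically; the host name, object count, and intervals are taken from the post, everything else is illustrative and not the original code:

// Sketch: programmatic equivalent of the app.config settings quoted earlier.
DataCacheServerEndpoint[] servers = new DataCacheServerEndpoint[1];
servers[0] = new DataCacheServerEndpoint("H1011.hoboo.net", 22233);

DataCacheFactoryConfiguration factoryConfig = new DataCacheFactoryConfiguration();
factoryConfig.Servers = servers;

// Local cache invalidated by cluster notifications rather than only by timeout.
factoryConfig.LocalCacheProperties = new DataCacheLocalCacheProperties(
    10,                                  // objectCount, as in the config file
    TimeSpan.FromSeconds(300),           // ttlValue
    DataCacheLocalCacheInvalidationPolicy.NotificationBased);

// How often this client polls the cluster for notifications.
factoryConfig.NotificationProperties = new DataCacheNotificationProperties(
    10000,                               // maxQueueLength (assumed value)
    TimeSpan.FromSeconds(120));          // pollInterval

DataCacheFactory factory = new DataCacheFactory(factoryConfig);
DataCache cache = factory.GetCache("default");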
The other class, which has Get():
public static object Get(string pName)
{
    object cachedItem = null;
    try
    {
        // Check configuration settings for AppFabric.
        bool appFabricCache;
        bool.TryParse(System.Configuration.ConfigurationManager.AppSettings["AppFabricCache"], out appFabricCache);
        if (appFabricCache)
        {
            // Get data from the AppFabric cache server.
            cachedItem = CacheUtil.GetCache().Get(pName);
        }
        else
        {
            // Get data from the local cache server.
            cachedItem = RuntimeContextOlderVersion.Get(pName);
        }
    }
    catch (Exception Ex)
    {
        // If it fails, write the reason to the log file.
        WriteAppFabricErrorLog(new AppFabricLogger()
        {
            CacheKey = pName,
            CacheData = "Get Method",
            ErrorString = Ex.ToString()
        });
    }
    return cachedItem;
}
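For completeness, the write path described earlier ("writes are done simply with the Put() method") would look something like this on Machine2 (a sketch mirroring the repro steps, not code from the post):

DataCache cache = CacheUtil.GetCache();
cache.Put("key1", "Changed");   // overwrites "Value1" in the cluster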
@stuartd: Yes, I have enabled notifications; you can see that in my app config.
For Stuart, here is the cache configuration:
PS C:\Windows\system32> get-cacheconfig
cmdlet Get-CacheConfig at command pipeline position 1
Supply values for the following parameters:
CacheName: default

CacheName                : default
TimeToLive               : 10 mins
CacheType                : Partitioned
Secondaries              : 0
MinSecondaries           : 0
IsExpirable              : True
EvictionType             : LRU
NotificationsEnabled     : True
WriteBehindEnabled       : False
WriteBehindInterval      : 300
WriteBehindRetryInterval : 60
WriteBehindRetryCount    : -1
ReadThroughEnabled       : True
ProviderType             : SampleProvider.Provider, SampleProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=cde85af3c5f6411e
ProviderSettings         : {"DBConnection"="Database=Test123;Server=..**.**;uid=****;pwd=*****;connection timeout=5000"}


External offset store with the Debezium embedded connector

My team is building a CDC service with the Debezium embedded connector. For offset storage we're thinking about using S3/DynamoDB. I'm wondering if anyone here has written something similar to externalize the offset store, what they chose, and why.
We have a Postgres DB as the source. Change Data Capture (CDC) is implemented by Postgres itself (via the pglogical extension), and this CDC subsystem is responsible for offset management. It maintains a list of CDC clients (aka slots), so if your client creates a CDC connection, the DB resumes from the point where that client previously disconnected (on the same slot). A new client creates a new slot and starts receiving only the CDC records created from that point in time onward. So there is no need for us to remember the offsets.
I had this challenge recently. You can write a custom class that extends org.apache.kafka.connect.storage.FileOffsetBackingStore or org.apache.kafka.connect.storage.MemoryOffsetBackingStore.
Subsequently, ensure the "offset.storage" config is set to the fully qualified class name.
Please see a sample below using Redis (maybe not production-ready) as the backing store, to give you an idea of how this can work.
package com.sample.cdc.offsetbackingstore
import com.sample.cdc.service.RedisManager
import org.apache.kafka.connect.errors.ConnectException
import org.apache.kafka.connect.runtime.WorkerConfig
import org.apache.kafka.connect.storage.MemoryOffsetBackingStore
import java.io.IOException
import java.nio.ByteBuffer
import java.util.concurrent.Callable
import java.util.concurrent.Future
class RedisOffsetBackingStore : MemoryOffsetBackingStore() {

    lateinit var redisManager: RedisManager
    lateinit var redisHost: String
    lateinit var redisPort: String

    override fun configure(config: WorkerConfig?) {
        super.configure(config)
        // Fail fast if the custom connection settings are missing
        redisHost = config?.getString("custom.config.redis.host")
            ?: throw ConnectException("custom.config.redis.host is not set")
        redisPort = config?.getString("custom.config.redis.port")
            ?: throw ConnectException("custom.config.redis.port is not set")
    }

    // Called by the Debezium engine at startup
    override fun start() {
        super.start()
        println("Initializing redis manager...")
        redisManager = RedisManager(redisHost, redisPort)
    }

    // Called by the Debezium engine during graceful shutdown
    override fun stop() {
        super.stop()
        println("Disposing redis client resources...")
        if (this::redisManager.isInitialized)
            redisManager.dispose()
    }

    // Called by the DebeziumEngine offset reader to read offsets
    override fun get(keys: MutableCollection<ByteBuffer>?): Future<MutableMap<ByteBuffer, ByteBuffer?>> {
        // Once the in-memory map has been populated from Redis, serve reads from it
        if (data.isNotEmpty())
            return super.get(keys)
        return executor.submit(Callable<MutableMap<ByteBuffer, ByteBuffer?>> {
            val result: MutableMap<ByteBuffer, ByteBuffer?> = HashMap()
            keys?.forEach {
                val offsetKey = String(it.array())
                val offsetValue = redisManager.get(offsetKey)
                if (offsetValue.isNotEmpty()) {
                    val buffer = ByteBuffer.wrap(offsetValue.toByteArray())
                    result[it] = buffer
                    data[it] = buffer
                }
            }
            result
        })
    }

    // Invoked by set() in MemoryOffsetBackingStore to persist offsets
    // during commit or graceful shutdown
    override fun save() {
        try {
            for ((key, value) in data) {
                val offsetKey = String(key!!.array())
                val offsetValue = String(value!!.array())
                redisManager.save(offsetKey, offsetValue)
            }
        } catch (e: IOException) {
            throw ConnectException(e)
        }
    }
}

// Ensure the settings below are present in the Debezium configuration:
// "offset.storage": "com.sample.cdc.offsetbackingstore.RedisOffsetBackingStore",
// "custom.config.redis.host": "localhost",
// "custom.config.redis.port": "6379"
Note: In the case of multiple standalone embedded Debezium services (for reliability and fault tolerance) with a custom offset backing store, you'll have to provide a way to handle offset race conditions and event deduplication.

What happens internally when an akka.conf file is read?

I am using OpenDaylight and trying to replace the default distributed database with Apache Ignite.
I am using the jar built from the source code here:
https://github.com/Romeh/akka-persistance-ignite and deployed it in the OpenDaylight Karaf container.
The following is a portion of the akka.conf file I am using in OpenDaylight to replace the LevelDB journal with Apache Ignite.
odl-cluster-data {
  akka {
    loglevel = DEBUG
    actor {
      provider = "akka.cluster.ClusterActorRefProvider"
      default-dispatcher {
        # Configuration for the fork join pool
        fork-join-executor {
          # Min number of threads to cap factor-based parallelism number to
          parallelism-min = 2
          # Parallelism (threads) ... ceil(available processors * factor)
          parallelism-factor = 2.0
          # Max number of threads to cap factor-based parallelism number to
          parallelism-max = 10
        }
        # Throughput defines the maximum number of messages to be
        # processed per actor before the thread jumps to the next actor.
        # Set to 1 for as fair as possible.
        throughput = 10
      }
    }
    remote {
      log-remote-lifecycle-events = off
      netty.tcp {
        hostname = "10.145.59.44"
        port = 2551
      }
    }
    cluster {
      seed-nodes = [
        "akka.tcp://test@127.0.0.1:2551"
      ]
      min-nr-of-members = 1
      auto-down-unreachable-after = 30s
    }
    # Disable legacy metrics in akka-cluster.
    akka.cluster.metrics.enabled = off
    akka.persistence.journal.plugin = "akka.persistence.journal.ignite"
    akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot.ignite"
    extensions = ["akka.persistence.ignite.extension.IgniteExtensionProvider"]
    persistence {
      # Ignite journal plugin
      journal {
        ignite {
          # Class name of the plugin
          class = "akka.persistence.ignite.journal.IgniteWriteJournal"
          plugin-dispatcher = "ignite-dispatcher"
          cache-prefix = "akka-journal"
          # Should be based on the data grid topology
          cache-backups = 1
          # Set to true if Ignite is already started in a separate standalone grid where the journal cache is already created
          cachesAlreadyCreated = false
        }
      }
      # Ignite snapshot plugin
      snapshot {
        ignite {
          # Class name of the plugin
          class = "akka.persistence.ignite.snapshot.IgniteSnapshotStore"
          plugin-dispatcher = "ignite-dispatcher"
          cache-prefix = "akka-snapshot"
          # Should be based on the data grid topology
          cache-backups = 1
          # Set to true if Ignite is already started in a separate standalone grid where the snapshot cache is already created
          cachesAlreadyCreated = false
        }
      }
    }
  }
}
However, the class IgniteWriteJournal does not seem to load, which I have checked by putting some print statements in its constructor, as follows:
public IgniteWriteJournal(Config config) throws NotSerializableException {
    System.out.println("!##$% inside IgniteWriteJournal constructor\n");
    ActorSystem actorSystem = context().system();
    serializer = SerializationExtension.get(actorSystem).serializerFor(PersistentRepr.class);
    storage = new Store<>(actorSystem);
    JournalCaches journalCaches = journalCacheProvider.apply(config, actorSystem);
    sequenceNumberTrack = journalCaches.getSequenceCache();
    cache = journalCaches.getJournalCache();
}
So what exactly happens to the class that is mentioned in the akka.persistence.journal.ignite tag? Does the constructor of that class get called? What exactly happens in the background when the akka.conf file is read?
Where are you looking for the print-outs - in data/log/karaf.log? System.out.println doesn't go there - use an org.slf4j.Logger.
How did you rebuild the IgniteWriteJournal source and deploy the new artifact? Are you sure your changes were actually deployed?

Google Application Credentials set and not found

I have an Amazon EC2 Linux instance set up and running my Java web application, which consumes REST requests. The problem is that I am trying to use Google Cloud Vision in this application to recognize violence/nudity in users' pictures.
Accessing the EC2 instance in my terminal, I set GOOGLE_APPLICATION_CREDENTIALS with the following command, which I found in the documentation:
export GOOGLE_APPLICATION_CREDENTIALS=<my_json_path.json>
Here comes my first problem: when I restart my server and run 'echo $GOOGLE_APPLICATION_CREDENTIALS', the variable is gone. OK, I set it in .bash_profile and .bashrc, and now it persists.
But when I ran my application, consuming the code below to get the adult and violence status of my picture, I got the following error:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
My code is the following:
Controller:
if (SafeSearchDetection.isSafe(user.getId())) {
    if (!UserDB.updateUserProfile(user)) {
        throw new SQLException("Failed to Update");
    }
} else {
    throw new IOException("Explicit Content");
}
SafeSearchDetection.isSafe(int idUser):
public static boolean isSafe(int idUser) {
    String path = IMAGES_PATH + idUser + ".jpg";
    try {
        mAdultMedicalViolence = detectSafeSearch(path);
        // Likelihood ordinals: 3 = POSSIBLE, 4 = LIKELY, 5 = VERY_LIKELY
        if (mAdultMedicalViolence.get(0) > 3)        // adult
            return false;
        else if (mAdultMedicalViolence.get(1) > 3)   // medical
            return false;
        else if (mAdultMedicalViolence.get(2) > 3)   // violence
            return false;
    } catch (IOException e) {
        e.printStackTrace();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return true;
}
detectSafeSearch(String path):
public static ArrayList<Integer> detectSafeSearch(String path) throws Exception {
    List<AnnotateImageRequest> requests = new ArrayList<AnnotateImageRequest>();
    ArrayList<Integer> adultMedicalViolence = new ArrayList<Integer>();

    ByteString imgBytes = ByteString.readFrom(new FileInputStream(path));
    Image img = Image.newBuilder().setContent(imgBytes).build();
    Feature feat = Feature.newBuilder().setType(Type.SAFE_SEARCH_DETECTION).build();
    AnnotateImageRequest request = AnnotateImageRequest.newBuilder()
            .addFeatures(feat).setImage(img).build();
    requests.add(request);

    // try-with-resources so the client's gRPC channel is released on every path
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
        BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
        List<AnnotateImageResponse> responses = response.getResponsesList();
        for (AnnotateImageResponse res : responses) {
            if (res.hasError()) {
                System.out.println("Error: " + res.getError().getMessage() + "\n");
                return null;
            }
            SafeSearchAnnotation annotation = res.getSafeSearchAnnotation();
            adultMedicalViolence.add(annotation.getAdultValue());
            adultMedicalViolence.add(annotation.getMedicalValue());
            adultMedicalViolence.add(annotation.getViolenceValue());
        }
    }

    for (int content : adultMedicalViolence)
        System.out.println(content + "\n");
    return adultMedicalViolence;
}
My REST application was built on top of Tomcat 8. After no success running the command:
System.getenv("GOOGLE_APPLICATION_CREDENTIALS")
I realized that my problem was in the environment variables of the Tomcat installation. To correct this, I just created a new file setenv.sh in Tomcat's /bin directory with the content:
export GOOGLE_APPLICATION_CREDENTIALS=<my_json_path.json>
And it worked!

Using the Reporting Services Web Service, how do you get the permissions of a particular user?

Using the SQL Server Reporting Services Web Service, how can I determine the permissions of a particular domain user for a particular report? The user in question is not the user that is accessing the Web Service.
I am accessing the Web Service using a domain service account (let's say MYDOMAIN\SSRSAdmin) that has full permissions in SSRS. I would like to programmatically find the permissions of a domain user (let's say MYDOMAIN\JimBob) for a particular report.
The GetPermissions() method on the Web Service will return a list of permissions that the current user has (MYDOMAIN\SSRSAdmin), but that is not what I'm looking for. How can I get this same list of permissions for MYDOMAIN\JimBob? I will not have the user's domain password, so using their credentials to call the GetPermissions() method is not an option. I am however accessing this from an account that has full permissions, so I would think that theoretically the information should be available to it.
SSRS gets the NT groups from the user's NT login token. This is why, when you are added to a new group, you are expected to log out and back in. The same applies to most Windows checks (SQL Server, shares, NTFS, etc.).
If you know the NT group(s)...
You can query the ReportServer database directly. I've lifted this almost directly out of one of our reports, which we use to check folder security (C.Type = 1). Filter on U.UserName.
SELECT
    R.RoleName,
    U.UserName,
    C.Path
FROM
    ReportServer.dbo.Catalog C WITH (NOLOCK)   -- parent item
    JOIN ReportServer.dbo.Policies P WITH (NOLOCK) ON C.PolicyID = P.PolicyID
    JOIN ReportServer.dbo.PolicyUserRole PUR WITH (NOLOCK) ON P.PolicyID = PUR.PolicyID
    JOIN ReportServer.dbo.Users U WITH (NOLOCK) ON PUR.UserID = U.UserID
    JOIN ReportServer.dbo.Roles R WITH (NOLOCK) ON PUR.RoleID = R.RoleID
WHERE
    C.Type = 1   -- folders
Look into the "GetPolicies" method, which you can see at the following link:
http://msdn.microsoft.com/en-us/library/reportservice2010.reportingservice2010.getpolicies.aspx
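As a rough illustration (my sketch, not code from the linked docs): assuming rs is a ReportingService2010 proxy authenticated as the MYDOMAIN\SSRSAdmin service account, you can read the explicit policies on an item and look for the user in them. One caveat, per the answer above: GetPolicies only returns the users/groups named on the item; it will not expand MYDOMAIN\JimBob's NT group memberships for you.

bool inheritsFromParent;
Policy[] policies = rs.GetPolicies("/FolderName/ReportName", out inheritsFromParent);
foreach (Policy p in policies)
{
    // Each policy names a user or group plus the roles granted to it.
    if (p.GroupUserName.Equals(@"MYDOMAIN\JimBob", StringComparison.OrdinalIgnoreCase))
    {
        foreach (Role r in p.Roles)
        {
            Console.WriteLine("{0} has role: {1}", p.GroupUserName, r.Name);
        }
    }
}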
Hopefully this will get you started. I use it when copying folder structure and reports from an old server to a new server, when I want to 'migrate' my SSRS items from the source to the destination server. It is a method to get the security policies for an item on one server and then set the security policies for an identical item on another server, after I have copied the item from the source server to the destination server. You have to set your own source and destination server names.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Web.Services.Protocols; //<=== required for SoapException

namespace SSRS_WebServices_Utility
{
    internal static class TEST
    {
        internal static void GetPoliciesForAnItem_from_Source_ThenSetThePolicyForTheItem_on_Destination(string itemPath)
        {
            string sSourceServer = "SOURCE-ServerName";
            Source_ReportService2010.ReportingService2010 sourceRS = new Source_ReportService2010.ReportingService2010();
            sourceRS.Credentials = System.Net.CredentialCache.DefaultCredentials;
            sourceRS.Url = @"http://" + sSourceServer + "/reportserver/reportservice2010.asmx";

            string sDestinationServer = "DESTINATION-ServerName";
            Destination_ReportService2010.ReportingService2010 DestinationRS = new Destination_ReportService2010.ReportingService2010();
            DestinationRS.Credentials = System.Net.CredentialCache.DefaultCredentials;
            DestinationRS.Url = @"http://" + sDestinationServer + "/reportserver/reportservice2010.asmx";

            Boolean val = true;
            Source_ReportService2010.Policy[] curPolicy = null;
            Destination_ReportService2010.Policy[] newPolicy = null;
            try
            {
                curPolicy = sourceRS.GetPolicies(itemPath, out val); //e.g. of itemPath: "/B2W/001_OLD_PuertoRicoReport"
                int iCounter = 0;
                newPolicy = new Destination_ReportService2010.Policy[curPolicy.Length];
                foreach (Source_ReportService2010.Policy p in curPolicy)
                {
                    //create the Policy
                    Destination_ReportService2010.Policy pNew = new Destination_ReportService2010.Policy();
                    pNew.GroupUserName = p.GroupUserName;
                    //create the Role, which is part of the Policy
                    //(note: only the first role of each source policy is copied)
                    Destination_ReportService2010.Role rNew = new Destination_ReportService2010.Role();
                    rNew.Description = p.Roles[0].Description;
                    rNew.Name = p.Roles[0].Name;
                    pNew.Roles = new Destination_ReportService2010.Role[1];
                    pNew.Roles[0] = rNew;
                    newPolicy[iCounter] = pNew;
                    iCounter += 1;
                }
                DestinationRS.SetPolicies(itemPath, newPolicy);
                Debug.Print("whatever");
            }
            catch (SoapException ex)
            {
                Debug.Print("SoapException: " + ex.Message);
            }
            catch (Exception Ex)
            {
                Debug.Print("NON-SoapException: " + Ex.Message);
            }
            finally
            {
                if (sourceRS != null)
                    sourceRS.Dispose();
                if (DestinationRS != null)
                    DestinationRS.Dispose();
            }
        }
    }
}
To invoke it, use the following:
TEST.GetPoliciesForAnItem_from_Source_ThenSetThePolicyForTheItem_on_Destination("/FolderName/ReportName");
where you put in your own SSRS folder name and report name, i.e. the path to the item.
In fact, I use a method that loops through all the items in the destination folder and then calls the method above, like this:
internal static void CopyTheSecurityPolicyFromSourceToDestinationForAllItems_2010()
{
    string sDestinationServer = "DESTINATION-ServerName";
    Destination_ReportService2010.ReportingService2010 DestinationRS = new Destination_ReportService2010.ReportingService2010();
    DestinationRS.Credentials = System.Net.CredentialCache.DefaultCredentials;
    DestinationRS.Url = @"http://" + sDestinationServer + "/reportserver/reportservice2010.asmx";

    // Return a list of catalog items in the report server database
    Destination_ReportService2010.CatalogItem[] items = DestinationRS.ListChildren("/", true);

    // For each catalog item, Debug.Print some properties and copy its policy
    foreach (Destination_ReportService2010.CatalogItem ci in items)
    {
        Debug.Print("START----------------------------------------------------");
        Debug.Print("Object Name: " + ci.Name);
        Debug.Print("Object Type: " + ci.TypeName);
        Debug.Print("Object Path: " + ci.Path);
        Debug.Print("Object Description: " + ci.Description);
        Debug.Print("Object ID: " + ci.ID);
        Debug.Print("END----------------------------------------------------");
        try
        {
            GetPoliciesForAnItem_from_Source_ThenSetThePolicyForTheItem_on_Destination(ci.Path);
        }
        catch (SoapException e)
        {
            Debug.Print("SoapException START----------------------------------------------------");
            Debug.Print(e.Detail.InnerXml);
            Debug.Print("SoapException END----------------------------------------------------");
        }
        catch (Exception ex)
        {
            Debug.Print("ERROR START----------------------------------------------------");
            Debug.Print(ex.GetType().FullName);
            Debug.Print(ex.Message);
            Debug.Print("ERROR END----------------------------------------------------");
        }
    }
}

Virtual Server IIS WMI Problem

I have been tasked with finding out what is causing an issue with this bit of code:
public static ArrayList GetEthernetMacAddresses()
{
    ArrayList addresses = new ArrayList();
    ManagementClass mc = new ManagementClass("Win32_NetworkAdapter");

    // This causes GetInstances(options)
    // to return all subclasses of Win32_NetworkAdapter
    EnumerationOptions options = new EnumerationOptions();
    options.EnumerateDeep = true;

    foreach (ManagementObject mo in mc.GetInstances(options))
    {
        string macAddr = mo["MACAddress"] as string;
        string adapterType = mo["AdapterType"] as string;
        if (!StringUtil.IsBlank(macAddr) && !StringUtil.IsBlank(adapterType))
        {
            if (adapterType.StartsWith("Ethernet"))
            {
                addresses.Add(macAddr);
            }
        }
    }
    return addresses;
}
On our (Win2003) virtual servers, this works when run as part of a console application but not from a web service running on IIS (on that same machine).
Alternatively, I can use this code in a web service on IIS (on the virtual server) and get the correct return values:
public static string GetMacAddresses()
{
    StringBuilder sb = new StringBuilder();
    ManagementClass mgmt = new ManagementClass("Win32_NetworkAdapterConfiguration");
    ManagementObjectCollection objCol = mgmt.GetInstances();
    foreach (ManagementObject obj in objCol)
    {
        if ((bool)obj["IPEnabled"])
        {
            if (sb.Length > 0)
            {
                sb.Append(";");
            }
            sb.Append(obj["MacAddress"].ToString());
        }
        obj.Dispose();
    }
    return sb.ToString();
}
Why does the second one work and not the first one?
Why only when called through an IIS web service on a virtual machine?
Any help would be appreciated.
UPDATE: After much telephone time with all different levels of MS Support, they've come to the conclusion that this is "as designed".
Since it is at the driver level for the virtual network adapter, the answer was that we should change our code "to work around the issue".
This means that you cannot reliably test code on virtual servers with the same code that you use on physical servers, since we can't guarantee that the servers are exact replicas...
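One way to "work around the issue" (my sketch, under the assumption that avoiding the WMI AdapterType property is acceptable) is to enumerate adapters through System.Net.NetworkInformation instead of Win32_NetworkAdapter:

using System.Collections;
using System.Net.NetworkInformation;

public static ArrayList GetEthernetMacAddressesWithoutWmi()
{
    ArrayList addresses = new ArrayList();
    foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
    {
        // The interface type comes from the OS network stack rather than
        // from the adapter driver's WMI data, so it should be populated
        // on virtual adapters as well.
        if (nic.NetworkInterfaceType == NetworkInterfaceType.Ethernet)
        {
            addresses.Add(nic.GetPhysicalAddress().ToString());
        }
    }
    return addresses;
}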
Okay, so I wrote this code to test the issue:
public void GetWin32_NetworkAdapter()
{
    DataTable dt = new DataTable();
    dt.Columns.Add("AdapterName", typeof(string));
    dt.Columns.Add("ServiceName", typeof(string));
    dt.Columns.Add("AdapterType", typeof(string));
    dt.Columns.Add("IPEnabled", typeof(bool));
    dt.Columns.Add("MacAddress", typeof(string));

    // Try getting it via Win32_NetworkAdapter (including subclasses)
    ManagementClass mgmt = new ManagementClass("Win32_NetworkAdapter");
    EnumerationOptions options = new EnumerationOptions();
    options.EnumerateDeep = true;
    ManagementObjectCollection objCol = mgmt.GetInstances(options);
    foreach (ManagementObject obj in objCol)
    {
        DataRow dr = dt.NewRow();
        dr["AdapterName"] = obj["Caption"].ToString();
        dr["ServiceName"] = obj["ServiceName"].ToString();
        dr["AdapterType"] = obj["AdapterType"];
        dr["IPEnabled"] = (bool)obj["IPEnabled"];
        if (obj["MacAddress"] != null)
        {
            dr["MacAddress"] = obj["MacAddress"].ToString();
        }
        else
        {
            dr["MacAddress"] = "none";
        }
        dt.Rows.Add(dr);
    }
    gvConfig.DataSource = dt;
    gvConfig.DataBind();
}
When it's run on a physical IIS box I get this:
Physical IIS server http://img14.imageshack.us/img14/8098/physicaloutput.gif
Same code on Virtual IIS server:
Virtual server http://img25.imageshack.us/img25/4391/virtualoutput.gif
See a difference? It's on the first line. The virtual server doesn't return the "AdapterType" string, which is why the original code was failing.
This brings up an interesting thought: if Virtual Server is supposed to be a "virtual" representation of a real IIS server, why doesn't it return the same values?
Why are the two returning different results? It's possible that, due to the different user accounts, you'll get different results running from the console and from a service.
Why does (1) fail and (2) work? Is it possible that mo["AdapterType"] returns null on some adapters? If so, does the code handle that condition?
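To check that last condition directly, here is a small diagnostic sketch (mine, not from the thread) that lists adapters which have a MAC address but a null AdapterType; these are exactly the ones that GetEthernetMacAddresses() silently skips:

foreach (ManagementObject mo in new ManagementClass("Win32_NetworkAdapter").GetInstances())
{
    string macAddr = mo["MACAddress"] as string;
    string adapterType = mo["AdapterType"] as string;  // null on the virtual adapter
    if (macAddr != null && adapterType == null)
    {
        Console.WriteLine("Adapter '{0}' has a MAC but no AdapterType", mo["Caption"]);
    }
}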