MS Exchange access by API or Web Services

I am looking for a way to access emails in Exchange. I assume this requires the 2007 version with SP1. As I understand it, Exchange can be accessed either through PowerShell or through its web services. What I am interested in is reading messages from various mailboxes to get their from/to/subject/body.
I've found that the web services make this possible through the FindItem and GetItem calls. Are there also PowerShell commands for getting at emails?
How can I access what I need?

You should use Exchange Web Services (EWS) for this.
Glen Scales has explained how to use it with PowerShell at http://gsexdev.blogspot.nl/2012/01/ews-managed-api-and-powershell-how-to.html?m=1

I looked over the native Exchange cmdlets and was only able to find a reference to the Get-Message cmdlet, but it appears that cmdlet is used to
view the details of one or more messages in a queue on a computer that has the Hub Transport server role or the Edge Transport server role installed.
But it doesn't look like the message body is returned.
If you can tolerate a paid solution (currently in Beta), my employer (CData Software) makes a set of cmdlets for working with Exchange data.
EDIT: Beta solution isn't helpful for OP.
With the limitations in the native Exchange cmdlets and the restriction for a production solution, I can offer my employer's ADO.NET Provider for Exchange. We have already published a Knowledge Base article (whose contents are copied below):
The CData ADO.NET Provider for Microsoft Exchange implements standard ADO.NET interfaces, enabling you to access the capabilities of the Microsoft Exchange API from .NET applications, such as PowerShell. The provider simplifies authentication and interaction with Microsoft Exchange data. This tutorial shows how to use some of the common ADO.NET objects to execute SQL queries directly from PowerShell.
Execute Queries
Follow the three steps below to execute any create, read, update, and delete (CRUD) command to Microsoft Exchange data in PowerShell:
Load the provider's assembly:
[Reflection.Assembly]::LoadFile("C:\Program Files\CData\CData ADO.NET Provider for Microsoft Exchange\lib\System.Data.CData.Exchange.dll")
Connect to Microsoft Exchange data. Specify the User and Password to connect to Exchange. Additionally, specify the address of the Exchange server you are connecting to and the Platform associated with the server.
$constr = "User='myUser@mydomain.onmicrosoft.com';Password='myPassword';Server='https://outlook.office365.com/EWS/Exchange.asmx';Platform='Exchange_Online';"
$conn= New-Object System.Data.CData.Exchange.ExchangeConnection($constr)
$conn.Open()
Instantiate the ExchangeDataAdapter, execute an SQL query, and output the results:
$sql="SELECT GivenName, Size from Contacts"
$da= New-Object System.Data.CData.Exchange.ExchangeDataAdapter($sql, $conn)
$dt= New-Object System.Data.DataTable
$da.Fill($dt)
$dt.Rows | foreach {
Write-Host $_.givenname $_.size
}
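The same connect, query, and iterate flow can be sketched in Python, using the standard-library sqlite3 module as a stand-in for the ADO.NET provider (the Contacts table and its sample rows here are made up for illustration):

```python
import sqlite3

# sqlite3 stands in for the ADO.NET connection; the pattern is the same:
# open a connection, run a SELECT, iterate over the returned rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Contacts (GivenName TEXT, Size INTEGER)")
conn.execute("INSERT INTO Contacts VALUES ('Alice', 120), ('Bob', 85)")

rows = conn.execute("SELECT GivenName, Size FROM Contacts").fetchall()
for given_name, size in rows:
    print(given_name, size)
```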
Update Microsoft Exchange Data
$cmd = New-Object System.Data.CData.Exchange.ExchangeCommand("UPDATE Contacts SET BusinessAddress_City='Raleigh' WHERE Id = @myId", $conn)
$cmd.Parameters.Add((New-Object System.Data.CData.Exchange.ExchangeParameter("myId","10456255-0015501366")))
$cmd.ExecuteNonQuery()
Insert Microsoft Exchange Data
$cmd = New-Object System.Data.CData.Exchange.ExchangeCommand("INSERT INTO Contacts (BusinessAddress_City) VALUES (@myCity)", $conn)
$cmd.Parameters.Add((New-Object System.Data.CData.Exchange.ExchangeParameter("myCity","Raleigh")))
$cmd.ExecuteNonQuery()
Delete Microsoft Exchange Data
$cmd = New-Object System.Data.CData.Exchange.ExchangeCommand("DELETE FROM Contacts WHERE Id = @myId", $conn)
$cmd.Parameters.Add((New-Object System.Data.CData.Exchange.ExchangeParameter("myId","001d000000YBRseAAH")))
$cmd.ExecuteNonQuery()
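The parameterized update/delete pattern above can be sketched in Python with the standard-library sqlite3 module standing in for the ADO.NET provider; sqlite's positional '?' placeholder plays the role of the named @myId parameter (table and Id values are illustrative):

```python
import sqlite3

# Parameters are passed separately from the SQL text, so values are never
# spliced into the statement string (same idea as ExchangeParameter above).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Contacts (Id TEXT PRIMARY KEY, BusinessAddress_City TEXT)")
conn.execute("INSERT INTO Contacts VALUES ('10456255-0015501366', 'Durham')")

conn.execute("UPDATE Contacts SET BusinessAddress_City = ? WHERE Id = ?",
             ("Raleigh", "10456255-0015501366"))
city = conn.execute("SELECT BusinessAddress_City FROM Contacts").fetchone()[0]

conn.execute("DELETE FROM Contacts WHERE Id = ?", ("10456255-0015501366",))
remaining = conn.execute("SELECT COUNT(*) FROM Contacts").fetchone()[0]
print(city, remaining)
```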


How to query devices using metadata in GCP IOT Core?

When I go to the IOT Core Registry page (on the GCP console) and select a device, I can edit it. There's a "Device metadata" section there, reading the following:
You can set custom metadata, such as manufacturer, location, etc. for the device. These
can be used to query devices in this registry. Learn more
But the documentation page shows nothing about querying devices using metadata.
Is this possible at all? What can be done using device metadata?
I am asking because I am looking for the following features that Azure IoT Hub has with device twin tags:
Ideally I would like to enrich messages the device sends (state, events) with corresponding metadata.
Querying for multiple devices based on a metadata field.
One first has to add device metadata before one can query it:
https://cloud.google.com/iot/docs/how-tos/devices#creating_a_device
https://cloud.google.com/iot/docs/how-tos/devices#getting_device_details
One can query with gcloud iot devices list (--registry=REGISTRY --region=REGION):
--filter="metadata.items.key['test_metadata'][value]='test_value'"
See gcloud topic filters for more information about filter expressions.
Or with a format expression: --format='value[](metadata.items.test_metadata)'
It might be easier to implement this using the client libraries. Following the suggestion of @MartinZeitler, list your devices, then perform a get for each device and check its metadata. See the Python code below for the implementation:
from google.cloud import iot_v1

def sample_list_devices(meta_key_name, meta_val_name):
    # Create a client
    client = iot_v1.DeviceManagerClient()

    project_id = "your-project-id"
    location = "asia-east1"  # define your device location
    registry = "your-registry-id"
    parent = f"projects/{project_id}/locations/{location}/registries/{registry}"

    # Initialize request argument(s)
    list_request = iot_v1.ListDevicesRequest(parent=parent)

    # Make the request
    list_result = client.list_devices(request=list_request)

    # Handle the response
    for response in list_result:
        device = response.num_id
        get_request = iot_v1.GetDeviceRequest(name=f"{parent}/devices/{device}")
        get_response = client.get_device(request=get_request)
        # .get() avoids a KeyError for devices missing the metadata key
        if get_response.metadata.get(meta_key_name) == meta_val_name:
            print(get_response)
            return get_response

# define the metadata key and value that you want to filter on
sample_list_devices(meta_key_name="test_key", meta_val_name="test_val")
No, it is not possible to query metadata the way you want. The docs say the following about the server:
"Cloud IoT Core does not interpret or index device metadata."
As a client-side workaround to simulate a query, you can list all the devices first and then filter the output by metadata.
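That list-then-filter workaround boils down to a plain filter over the metadata of each listed device. A minimal sketch, where the device dicts are made-up stand-ins for what the client library would return:

```python
# Stand-in for the result of listing devices and fetching each one's details;
# real code would call client.list_devices / client.get_device instead.
devices = [
    {"id": "dev-1", "metadata": {"manufacturer": "acme", "location": "berlin"}},
    {"id": "dev-2", "metadata": {"manufacturer": "acme", "location": "paris"}},
    {"id": "dev-3", "metadata": {"location": "berlin"}},
]

def filter_by_metadata(devices, key, value):
    # .get() avoids a KeyError for devices missing the metadata key
    return [d for d in devices if d["metadata"].get(key) == value]

berlin = filter_by_metadata(devices, "location", "berlin")
print([d["id"] for d in berlin])
```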

How to call on Web Service API and route data into Azure SQL Database?

Having configured an Azure SQL Database, I would like to feed some tables with data from an HTTP REST GET call.
I have tried Microsoft Flow (whose HTTP Request action is utterly botched) and I am now exploring Azure Data Factory, to no avail.
The only way I can currently think of is provisioning an Azure VM and installing Postman with Newman. But then I would still need to create a Web Service interface to the Azure SQL Database.
Does Microsoft offer no HTTP call service to hook up to an Azure SQL Database?
Had the same situation a couple of weeks ago and I ended up building the API call management using Azure Functions. No problem using the Azure SDKs to upload the result to e.g. blob storage or Data Lake. And you can add whatever assembly you need to perform the HTTP post operation.
From there you can easily pull it into an Azure SQL DB with Data Factory.
I would suggest you write yourself an Azure Data Factory custom activity to achieve this. I've done this for a recent project.
Add a C# class library to your ADF solution and create a class that inherits from IDotNetActivity. Then in the Execute method make the HTTP web request to get the data. Land the downloaded file in blob storage first, then have a downstream activity load the data into SQL DB.
public class GetLogEntries : IDotNetActivity
{
    public IDictionary<string, string> Execute(
        IEnumerable<LinkedService> linkedServices,
        IEnumerable<Dataset> datasets,
        Activity activity,
        IActivityLogger logger)
    {
        // etc...
        HttpWebResponse myHttpWebResponse = (HttpWebResponse)httpWebRequest.GetResponse();
You can use the ADF linked services to authenticate against the storage account and define where container and file name you want as the output etc.
This is an example I used for data lake. But there is an almost identical class for blob storage.
Dataset outputDataset = datasets.Single(dataset => dataset.Name == activity.Outputs.Single().Name);

AzureDataLakeStoreLinkedService outputLinkedService = linkedServices.First(
    linkedService => linkedService.Name == outputDataset.Properties.LinkedServiceName)
    .Properties.TypeProperties as AzureDataLakeStoreLinkedService;
Don't bother with an input for the activity.
You will need an Azure Batch Service as well to handle the compute for the compiled classes. Check out my blog post on doing this.
https://www.purplefrogsystems.com/paul/2016/11/creating-azure-data-factory-custom-activities/
Hope this helps.

PowerBI Embedded: Datasource has no credentials, unable to Patch the gateway

I wanted to test out PowerBI Embedded, so I downloaded the sample app that is able to publish a pbix file and embed it.
So I created the simplest PowerBI file one can make, with Azure SQL as the underlying data source, using the DirectQuery option.
I successfully imported the PowerBI file into my workspace collection.
I changed the connection string of my PowerBI file successfully.
After that, the code to patch the gateway with the username and password credentials fails.
Then when I tried to view the embedded report I got this error.
I believe the connection string is in the correct format because it was updated successfully. I also already tried to point it to another SQL database, and the error message then shows the other SQL database.
1) I thought this could be because the gateway does not get the credentials that I gave it. Is that correct?
2) Does someone know how can I fix this?
Thanks in advance!
As @Cuong Le stated, this was a Microsoft issue at first.
When the problem was fixed I still received a BadRequest exception. After trying to update the credentials with the PowerBI-CLI, the problem became clearer: I needed to grant access to Azure IP addresses on the relevant SQL database. Once I did that I was able to update the credentials. Unfortunately the PowerBI API SDK's exception messages are not as good as the PowerBI-CLI's. I also tried it with the PowerBI API SDK and it worked as well.
The exception message I got was the following:
[ powerbi ] {"error":{"code":"DM_GWPipeline_Gateway_DataSourceAccessError","pbi.error":{"code":"DM_GWPipeline_Gateway_DataSourceAccessError","parameters":{},"details":[{"code":"DM_ErrorDetailNameCode_UnderlyingErrorCode","detail":{"type":1,"value":"-2146232060"}},{"code":"DM_ErrorDetailNameCode_UnderlyingErrorMessage","detail":{"type":1,"value":"Cannot open server 'engiep-dev-weeu-sql' requested by the login. Client with IP address 'xx.xx.xx.213' is not allowed to access the server. To enable access, use the Windows Azure Management Portal or run sp_set_firewall_rule on the master database to create a firewall rule for this IP address or address range. It may take up to five minutes for this change to take effect."}},{"code":"DM_ErrorDetailNameCode_UnderlyingHResult","detail":{"type":1,"value":"-2146232060"}},{"code":"DM_ErrorDetailNameCode_UnderlyingNativeErrorCode","detail":{"type":1,"value":"40615"}}]}}}
The correct connectionstring format to use is:
Data Source=yourDataSource;Initial Catalog=yourDataBase;User ID=yourUser;Password=yourPass;
(Don't use quotes anywhere.)
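A quick way to sanity-check that format is to split the string on ';', then on the first '=', and flag quoted values, since the quotes are exactly what trips things up here. A small Python sketch (the helper name is made up):

```python
# Parse a "Key=Value;Key=Value;" connection string into a dict,
# rejecting quoted values, which should not appear in this format.
def parse_conn_str(s):
    result = {}
    for pair in filter(None, s.split(";")):
        key, _, value = pair.partition("=")
        if value.startswith(("'", '"')):
            raise ValueError(f"quoted value for {key!r}: remove the quotes")
        result[key.strip()] = value
    return result

parsed = parse_conn_str(
    "Data Source=yourDataSource;Initial Catalog=yourDataBase;"
    "User ID=yourUser;Password=yourPass;")
print(parsed["Initial Catalog"])
```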
I was experiencing the same issue. Also it is an open issue on github.
To solve this, I used the PowerBI CLI 1.0.4 from NPM and used the update-connection operation (remember to add -d).
powerbi update-connection -c [workspace name] -k [access key] -w [workspace id] -d [dataset id] -s "Data Source=xxx.database.windows.net;Initial Catalog=xxx;User ID=xxx;Password=xxx"
If it fails, run the update-connection operation again.
The issue happens because sometimes data source credentials are not carried over to the workspace.
In the case of reports that use DirectQuery, credentials are never brought along with the pbix, as an import is done; all private info is stripped out.
Hope this helps!
Thanks

How to change client schema during provisioning?

I'm rushing (never a good thing) to get Sync Framework up and running for an "offline support" deadline on my project. We have a SQL Express 2008 instance on our server and will deploy SQLCE to the clients. Clients will only sync with the server, no peer-to-peer.
So far I have the following working:
Server schema setup
Scope created and tested
Server provisioned
Client provisioned w/ table creation
I've been very impressed with the relative simplicity of all of this. Then I realized the following:
Schema created through client provisioning to SQLCE does not setup default values for uniqueidentifier types.
FK constraints are not created on client
Here is the code that is being used to create the client schema (pulled from an example I found somewhere online)
static void Provision()
{
    SqlConnection serverConn = new SqlConnection(
        "Data Source=xxxxx, xxxx; Database=xxxxxx; " +
        "Integrated Security=False; Password=xxxxxx; User ID=xxxxx;");

    // create a connection to the SyncCompactDB database
    SqlCeConnection clientConn = new SqlCeConnection(
        @"Data Source='C:\SyncSQLServerAndSQLCompact\xxxxx.sdf'");

    // get the description of the scope from the SyncDB server database
    DbSyncScopeDescription scopeDesc = SqlSyncDescriptionBuilder.GetDescriptionForScope(
        ScopeNames.Main, serverConn);

    // create CE provisioning object based on the scope
    SqlCeSyncScopeProvisioning clientProvision = new SqlCeSyncScopeProvisioning(clientConn, scopeDesc);
    clientProvision.SetCreateTableDefault(DbSyncCreationOption.CreateOrUseExisting);

    // starts the provisioning process
    clientProvision.Apply();
}
When Sync Framework creates the schema on the client I need to make the additional changes listed earlier (default values, constraints, etc.).
This is where I'm getting confused (and frustrated):
I came across a code example that shows a SqlCeClientSyncProvider that has a CreatingSchema event. This code example actually shows setting the RowGuid property on a column which is EXACTLY what I need to do. However, what is a SqlCeClientSyncProvider?! This whole time (4 days now) I've been working with SqlCeSyncProvider in my sync code. So there is a SqlCeSyncProvider and a SqlCeClientSyncProvider?
The documentation on MSDN is not very good at explaining what either of these is.
I'm further confused about whether I should make schema changes at provision time or at sync time.
How would you all suggest that I make schema changes to the client CE schema during provisioning?
SqlCeSyncProvider and SqlCeClientSyncProvider are different.
The latter is what is commonly referred to as the offline provider and this is the provider used by the Local Database Cache project item in Visual Studio. This provider works with the DbServerSyncProvider and SyncAgent and is used in hub-spoke topologies.
The one you're using is referred to as a collaboration provider or peer-to-peer provider (which also works in a hub-spoke scenario). SqlCeSyncProvider works with SqlSyncProvider and SyncOrchestrator and has no corresponding Visual Studio tooling support.
Both providers require provisioning the participating databases.
The two types of providers provision the sync objects required to track and apply changes differently. The SchemaCreated event applies to the offline provider only. This gets fired the first time a sync is initiated and the framework detects that the client database has not been provisioned (create user tables and the corresponding sync framework objects).
The scope provisioning used by the other provider doesn't apply constraints other than the PK, so you will have to do a post-provisioning step to apply the defaults and constraints yourself outside of the framework.
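That post-provisioning step amounts to running your own DDL against the client database after the framework has created the tables. A toy sketch of the idea in Python, with sqlite3 standing in for the SQL CE connection and a made-up Orders table (SQL CE syntax and the framework calls themselves are elided):

```python
import sqlite3

# sqlite3 stands in for the SQL CE client connection.
conn = sqlite3.connect(":memory:")

# Pretend the sync framework provisioned this table (PK only, no defaults).
conn.execute("CREATE TABLE Orders (Id TEXT PRIMARY KEY, CustomerId TEXT)")

# Post-provisioning step: run the extra DDL the framework skipped,
# e.g. columns with defaults. Here, a column with a constant default.
post_provision_ddl = [
    "ALTER TABLE Orders ADD COLUMN Region TEXT DEFAULT 'EU'",
]
for ddl in post_provision_ddl:
    conn.execute(ddl)

conn.execute("INSERT INTO Orders (Id, CustomerId) VALUES ('1', 'c1')")
region = conn.execute("SELECT Region FROM Orders").fetchone()[0]
print(region)  # the default was applied by the database
```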
While researching solutions without using SyncAgent I found that the following would also work (in addition to my commented solution above):
Provision the client and let the framework create the client [user] schema. Now you have your tables.
Deprovision - this removes the restrictions on editing the tables/columns
Make your changes (in my case setting up Is RowGuid on PK columns and adding FK constraints) - this actually required me to drop and re-add a column, as you can't change the "Is RowGuid" property on an existing column
Provision again using DbSyncCreationOption.CreateOrUseExisting

Searching for E-mails with Exchange Web Services Operations

I am doing an integration project for a customer running Microsoft Exchange 2007 (BPOS). I am looking for a way to search for e-mail using the Exchange Web Services Operations -- MS' API to their own hosted exchange solution. So far, I have found a nice API description, but as far as I can see none of it allows for searching for e-mails using different criteria. In this case, I need to find all e-mails that contains a specific sender or recipient identified by an e-mail address.
Could you provide me with guidance on how to do this? Thanks.
In my (admittedly minimal) experience with Exchange Web Services, the only way to do this would be to retrieve all items in a folder and scan through their properties.
You need to specify which properties are retrieved when you call the FindItem() operation.
PathToUnindexedFieldType fieldTypePath = new PathToUnindexedFieldType();
fieldTypePath.FieldURI = UnindexedFieldURIType.folderDisplayName;
GetFolderType folderType = new GetFolderType();
folderType.FolderShape = new FolderResponseShapeType();
folderType.FolderShape.BaseShape = DefaultShapeNamesType.IdOnly;
folderType.FolderShape.AdditionalProperties = new BasePathToElementType[1];
folderType.FolderShape.AdditionalProperties[0] = fieldTypePath;
So the only saving grace is that you don't need to retrieve the full email body etc - just the fields you explicitly require.
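For the sender search itself, the FindItem operation accepts a Restriction element, so you don't have to scan every item client-side. Below is a sketch of building such a request body in Python with the standard library; the element names follow my reading of the EWS 2007 schema and should be verified against the service's WSDL before use:

```python
import xml.etree.ElementTree as ET

# EWS messages/types namespaces (per the 2006 schema URIs used by EWS 2007).
M = "http://schemas.microsoft.com/exchange/services/2006/messages"
T = "http://schemas.microsoft.com/exchange/services/2006/types"

# FindItem with a Restriction: substring match on the sender field.
find_item = ET.Element(f"{{{M}}}FindItem", Traversal="Shallow")
shape = ET.SubElement(find_item, f"{{{M}}}ItemShape")
ET.SubElement(shape, f"{{{T}}}BaseShape").text = "IdOnly"

restriction = ET.SubElement(find_item, f"{{{M}}}Restriction")
contains = ET.SubElement(restriction, f"{{{T}}}Contains",
                         ContainmentMode="Substring",
                         ContainmentComparison="IgnoreCase")
ET.SubElement(contains, f"{{{T}}}FieldURI", FieldURI="message:Sender")
ET.SubElement(contains, f"{{{T}}}Constant", Value="someone@example.com")

folders = ET.SubElement(find_item, f"{{{M}}}ParentFolderIds")
ET.SubElement(folders, f"{{{T}}}DistinguishedFolderId", Id="inbox")

body = ET.tostring(find_item, encoding="unicode")
print(body)
```

This body would then be wrapped in a SOAP envelope and posted to the Exchange.asmx endpoint.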