Embarcadero C++ Builder 11.2 Architect.
I'm trying to access the Multi-Tenant information in my RAD Server programmatically. The EMSInternalAPI does not provide access to that information, so I tried the following:
The .dfm file (localhost is the remote server running IIS and RAD Server):
object TenantModuleResource: TTenantModuleResource
  Height = 258
  Width = 441
  object qryTenants: TFDQuery
    Connection = TenantConection
    Left = 72
    Top = 40
  end
  object FDStanStorageJSONLink: TFDStanStorageJSONLink
    Left = 272
    Top = 128
  end
  object FDPhysIBDriverLink: TFDPhysIBDriverLink
    Left = 272
    Top = 48
  end
  object TenantConection: TFDConnection
    Params.Strings = (
      'Server=localhost'
      'User_Name=sysdba'
      'Password=masterkey'
      'Database=C:\Data\emsserver.ib'
      'InstanceName=gds_db'
      'Port=3050'
      'DriverID=IB')
    Left = 72
    Top = 128
  end
end
The code:
void TTenantModuleResource::Get(TEndpointContext* AContext, TEndpointRequest* ARequest, TEndpointResponse* AResponse)
{
    std::unique_ptr<TMemoryStream> oStr(new TMemoryStream());
    qryTenants->Close();
    qryTenants->SQL->Text = "SELECT tenantid, tenantname FROM tenants";
    qryTenants->Open();
    qryTenants->SaveToStream(oStr.get(), TFDStorageFormat::sfJSON);
    AResponse->Body->SetStream(oStr.release(), "application/json", true);
}

static void Register()
{
    std::unique_ptr<TEMSResourceAttributes> attributes(new TEMSResourceAttributes());
    attributes->ResourceName = "tenants";
    RegisterResource(__typeinfo(TTenantModuleResource), attributes.release());
}
and ended up getting a log error of "...Win32 error 10060", which is a timeout from what I can tell. I've seen where the InterBase docs suggest that this error is thrown when there is no client license.
I have the RAD Server Site license but not the client license; however, I would like to be able to work with the tenant records without using the Multi-Tenant Console app.
My question is: does anyone know of a way to programmatically get to the tenant data in the emsserver.ib database?
I am trying to download an image from Dataverse using the Dynamics Web API.
I am able to do that using {{webapiurl}}sample_imageattributedemos(d66ecb6c-4fd1-ec11-a7b5-6045bda5603f)/entityimage/$value.
But when I try to download the full/actual-size image with {{webapiurl}}sample_imageattributedemos(d66ecb6c-4fd1-ec11-a7b5-6045bda5603f)/entityimage/$value?fullsize=true, I still get the reduced-size file.
I tried to download the image using the sample code, where I have additionally added the CanStoreFullImage = true attribute.
Please find the code snippet below for reference:
CreateAttributeRequest createEntityImageRequest = new CreateAttributeRequest
{
    EntityName = _customEntityName.ToLower(),
    Attribute = new ImageAttributeMetadata
    {
        SchemaName = "EntityImage", // The name is always EntityImage
        // Required level must be AttributeRequiredLevel.None
        RequiredLevel = new AttributeRequiredLevelManagedProperty(AttributeRequiredLevel.None),
        DisplayName = new Microsoft.Xrm.Sdk.Label("Image", 1033),
        Description = new Microsoft.Xrm.Sdk.Label("An image to show with this demonstration.", 1033),
        CanStoreFullImage = true,
        IsPrimaryImage = false,
    }
};
How can I achieve this, i.e. download the full-size image using the Web API?
The correct syntax is size=full, not fullsize=true.
To build such requests you can use my tool Dataverse REST Builder; you can find the operations to deal with Image fields under the Manage Image Data request type.
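For reference, a minimal C# sketch of that corrected request (the org URL and bearer token are placeholders and acquiring the token is out of scope here; the entity name and record id are the ones from the question):

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class FullImageDownload
{
    static async Task Main()
    {
        // Placeholder org URL and token; obtaining the token (e.g. via MSAL) is assumed.
        var webApiUrl = "https://yourorg.api.crm.dynamics.com/api/data/v9.2/";
        var accessToken = "<bearer token>";

        using var client = new HttpClient { BaseAddress = new Uri(webApiUrl) };
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // size=full (rather than fullsize=true) asks for the original image instead of the thumbnail.
        var url = "sample_imageattributedemos(d66ecb6c-4fd1-ec11-a7b5-6045bda5603f)/entityimage/$value?size=full";

        var bytes = await client.GetByteArrayAsync(url);
        await File.WriteAllBytesAsync("entityimage_full.jpg", bytes);
    }
}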
I'm using Sync Framework v2.1 configured in a hub <--> spoke fashion.
Hub: SQL Server 2012 using SqlSyncProvider.
Spokes: LocalDb 2012 using SqlSyncProvider. Each spoke's database begins as a restored backup from the server, after which PostRestoreFixup is executed against it. In investigating this, I've also tried starting with an empty spoke database whose schema and data are created through provisioning and an initial, download-only sync.
Assume two spokes (A & B) and a central hub (let's call it H). They each have one table with one record and they're all in sync.
1. Spoke A changes the record and syncs, leaving A & H with identical records.
2. Spoke B changes the same record and syncs, resulting in a conflict with the change made in step #1. B's record is overwritten with H's, and H's record remains as-is. This is the expected/desired result. However, the SyncOperationStatistics returned by the orchestrator suggest changes are made at H. I've tried both SyncDirectionOrder directions, with these results:
- DownloadAndUpload (H's local_update_peer_timestamp and last_change_datetime are updated) -->
  * Download changes total: 1
  * Download changes applied: 1
  * Download changes failed: 0
  * Upload changes total: 1
  * Upload changes applied: 1
  * Upload changes failed: 0
- UploadAndDownload (H's local_update_peer_timestamp is updated) -->
  * Upload changes total: 1
  * Upload changes applied: 1
  * Upload changes failed: 0
  * Download changes total: 1
  * Download changes applied: 1
  * Download changes failed: 0
And, indeed, when Spoke A syncs again the record is downloaded from H, even though H's record hasn't changed. Why?
The problem arising from this is, for example, if Spoke A makes another change to the record between steps #2 and 3, that change will (falsely) be flagged as a conflict and will be overwritten at step #3.
Here's the pared-down code demonstrating the issue or, rather, my question. Note that I've implemented the provider's ApplyChangeFailed handlers such that the server wins, regardless of the SyncDirectionOrder:
private const string ScopeName = "TestScope";
private const string TestTable = "TestTable";

public static SyncOperationStatistics Synchronize(SyncEndpoint local, SyncEndpoint remote, EventHandler<DbSyncProgressEventArgs> eventHandler)
{
    using (var localConn = new SqlConnection(local.ConnectionString))
    using (var remoteConn = new SqlConnection(remote.ConnectionString))
    {
        // provision the remote server if necessary
        //
        var serverProvision = new SqlSyncScopeProvisioning(remoteConn);
        if (!serverProvision.ScopeExists(ScopeName))
        {
            var serverScopeDesc = new DbSyncScopeDescription(ScopeName);
            var serverTableDesc = SqlSyncDescriptionBuilder.GetDescriptionForTable(TestTable, remoteConn);
            serverScopeDesc.Tables.Add(serverTableDesc);
            serverProvision.PopulateFromScopeDescription(serverScopeDesc);
            serverProvision.Apply();
        }

        // provision locally (localDb), if necessary, bringing down the server's scope
        //
        var clientProvision = new SqlSyncScopeProvisioning(localConn);
        if (!clientProvision.ScopeExists(ScopeName))
        {
            var scopeDesc = SqlSyncDescriptionBuilder.GetDescriptionForScope(ScopeName, remoteConn);
            clientProvision.PopulateFromScopeDescription(scopeDesc);
            clientProvision.Apply();
        }

        // create\initialize the sync providers and go for it...
        //
        using (var localProvider = new SqlSyncProvider(ScopeName, localConn))
        using (var remoteProvider = new SqlSyncProvider(ScopeName, remoteConn))
        {
            localProvider.SyncProviderPosition = SyncProviderPosition.Local;
            localProvider.SyncProgress += eventHandler;
            localProvider.ApplyChangeFailed += LocalProviderOnApplyChangeFailed;

            remoteProvider.SyncProviderPosition = SyncProviderPosition.Remote;
            remoteProvider.SyncProgress += eventHandler;
            remoteProvider.ApplyChangeFailed += RemoteProviderOnApplyChangeFailed;

            var syncOrchestrator = new SyncOrchestrator
            {
                LocalProvider = localProvider,
                RemoteProvider = remoteProvider,
                Direction = SyncDirectionOrder.UploadAndDownload // also an issue with DownloadAndUpload
            };

            return syncOrchestrator.Synchronize();
        }
    }
}

private static void RemoteProviderOnApplyChangeFailed(object sender, DbApplyChangeFailedEventArgs e)
{
    // ignore conflicts at the server
    //
    e.Action = ApplyAction.Continue;
}

private static void LocalProviderOnApplyChangeFailed(object sender, DbApplyChangeFailedEventArgs e)
{
    // server wins, force write at each client
    //
    e.Action = ApplyAction.RetryWithForceWrite;
}
To reiterate, using this code along with the configuration described at the outset, conflicting rows are, as expected, overwritten on the spoke containing the conflict, and the server's version of that row remains as-is (unchanged). However, I'm seeing that each conflict results in an update to the server's xxx_tracking table, specifically the local_update_peer_timestamp and last_change_datetime fields. This, I'm guessing, results in a download to every other spoke even though the server's data hasn't really changed. This seems unnecessary and is, to me, counter-intuitive.
I have a webserver configured with ColdFusion 10. Within an application I have built in ColdFusion, I want to deploy a Crystal Report that requires a parameter that the user would enter. I built the report in Crystal Reports 2011. The report works within the Designer.
I then used Recrystallize to generate the ASPX, ASPX.VB, and Web.config pages that go with the report.
ColdFusion requires 32-bit applications to be enabled while the Crystal Reports components require them to be disabled, so I had to adjust the IIS settings by putting the Crystal Report and its pages in their own folder, converting that folder to an application, and assigning that application to a different Application Pool than the ColdFusion application.
The report viewer initially opened with the prompt for the parameter that the report was built on. When you entered the parameter and clicked OK, the report would fail with a dialog saying: Failed to open the connection. Failed to open the connection. [with the report name].
I am not sure where to begin troubleshooting this.
Any help that you can provide would be greatly appreciated.
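"Failed to open the connection" usually means the report cannot log on to its data source at runtime rather than a problem with the viewer itself, so one place to start troubleshooting is supplying the database logon in code before the report is displayed. A minimal C# sketch assuming the standard CrystalDecisions.CrystalReports.Engine API (the Recrystallize-generated pages are VB, and the report path, credentials, parameter name, and viewer control name here are placeholders):

using System;
using CrystalDecisions.CrystalReports.Engine;

public partial class ReportPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var report = new ReportDocument();
        report.Load(Server.MapPath("~/Reports/MyParameterReport.rpt")); // placeholder path

        // Without a valid logon, the viewer typically raises "Failed to open the connection"
        // right after the parameter prompt is answered.
        report.SetDatabaseLogon("dbUser", "dbPassword", "dbServer", "dbName");

        // The parameter can also be supplied in code instead of relying on the prompt.
        report.SetParameterValue("UserParameter", Request.QueryString["value"]);

        crvReport.ReportSource = report; // crvReport = the CrystalReportViewer control on the page
    }
}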
This is the .aspx file:
<asp:UpdatePanel ID="updpnlReport" runat="server">
    <ContentTemplate>
        <CR:CrystalReportViewer ID="crvAccountReportParameter" runat="server"
            oninit="crvAccountReportParameter_Init"
            EnableParameterPrompt="False" HasToggleParameterPanelButton="false" HasCrystalLogo="False" />
    </ContentTemplate>
</asp:UpdatePanel>
This is the .cs file:
protected void btnSubmit_Click(object sender, EventArgs e)
{
    LoadData();
}

protected void LoadData()
{
    string pstrType;
    pstrType = Request.QueryString["Type"];
    string strCompanyName = objSession.SelCompanyName;
    string strBranchName = objSession.SelBranchName;
    string strHeading = "";
    DataSet dsData = null;

    dsData = objAccountReportBAL.getAccountRegister(Convert.ToInt16(objSession.FyId), int.MinValue, long.MinValue, Convert.ToDateTime(RadDtpFromDate.SelectedDate), Convert.ToDateTime(RadDtpToDate.SelectedDate), pstrType);
    dsData.Tables[0].TableName = "Account_Trn_v";

    if (pstrType == "JV")
    {
        strHeading = "Journal Voucher Register Report";
        rptDoc.Load(Server.MapPath("~/ReportCrystal/Account/Detail/GeneralVoucharRegister.rpt"));
    }

    rptDoc.SetDataSource(dsData.Tables[0]);
    rptDoc.SetParameterValue("#CompanyName", objSession.SelCompanyName);
    rptDoc.SetParameterValue("#BranchName", objSession.SelBranchName);
    rptDoc.SetParameterValue("#Heading", strHeading);
    rptDoc.SetParameterValue("#Stdate", RadDtpFromDate.SelectedDate);
    rptDoc.SetParameterValue("#EnDate", RadDtpToDate.SelectedDate);

    crvAccountReportParameter.ReportSource = rptDoc;
    crvAccountReportParameter.DataBind();
}
I have a PHP application that contains three small applications. Each application has its own users, and they are unique across the whole system. I have a problem with session management. When a user is logged in at server.com/app1 and then opens server.com/app2, the second application logs them in automatically with that user, even though the user has no rights in that application. In the login page I do this:
$status = $user->status;
if ($status != 4) {
    $auth_key = session_encrypt($userdata, $passdata);
    $SQL = "UPDATE customer SET auth_key = '$auth_key'
            WHERE username = '$userdata' ";
    $auth_query = mysql_db_query($db, $SQL);

    setcookie("auth_key", $auth_key, time() + 60 * 60 * 24 * 7, "/app1", "server.com", false, true);

    // Assign variables to session
    session_regenerate_id(true);
    $session_id = $user->id;
    $session_username = $userdata;
    $_SESSION['cid'] = $session_id;
    $_SESSION['username'] = $session_username;
    $_SESSION['status'] = $status;
    $_SESSION['user_lastactive'] = time();

    header("Location: index.php");
    exit;
}
But this doesn't work. Can someone help me fix my sessions? Thanks :)
If I'm reading your question correctly, your problem is that your three apps are independent but are hosted on the same server and use the same PHP instance. This results in their sharing the same PHP session, which gets filled up with inappropriate garbage.
You have several potential solutions:
The first and easiest is to prefix your session keys in one way or another, i.e. use $_SESSION['app1']['param'] or $_SESSION['app1_param'] rather than $_SESSION['param'].
Another, if you have PHP installed as CGI rather than as an Apache module, is to configure each individual app's php.ini so that they no longer share their session id (i.e. configure the session cookie name and/or path) or store the session data in the same location (which is somewhere in /tmp, if I recall correctly).
If you would like your sessions to be handled independently by each app, then it might be easier to just set a unique session id for each app in the cookie.
setcookie("auth_key", $auth_key, time() + 60 * 60 * 24 * 7, "/app1", "server.com", false, true);
setcookie("auth_key", $auth_key, time() + 60 * 60 * 24 * 7, "/app2", "server.com", false, true);
setcookie("auth_key", $auth_key, time() + 60 * 60 * 24 * 7, "/app3", "server.com", false, true);
I'm researching a way to build an n-tiered sync solution. From the WebSharingAppDemo-CEProviderEndToEnd sample it seems almost feasible; however, for some reason the app will only sync if the client has a live SQL DB connection. Can someone explain what I'm missing and how to sync without exposing SQL to the internet?
The problem I'm experiencing is that when I provide a relational sync provider that has an open SQL connection from the client, it works fine; but when I provide a relational sync provider that has a closed but configured connection string, as in the example, I get an error from the WCF service stating that the server did not receive the batch file. So what am I doing wrong?
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
builder.DataSource = hostName;
builder.IntegratedSecurity = true;
builder.InitialCatalog = "mydbname";
builder.ConnectTimeout = 1;
provider.Connection = new SqlConnection(builder.ToString());
// provider.Connection.Open(); // un-commenting this causes the code to work
//create anew scope description and add the appropriate tables to this scope
DbSyncScopeDescription scopeDesc = new DbSyncScopeDescription(SyncUtils.ScopeName);
//class to be used to provision the scope defined above
SqlSyncScopeProvisioning serverConfig = new SqlSyncScopeProvisioning();
....
The error I get occurs in this part of the WCF code:
public SyncSessionStatistics ApplyChanges(ConflictResolutionPolicy resolutionPolicy, ChangeBatch sourceChanges, object changeData)
{
    Log("ProcessChangeBatch: {0}", this.peerProvider.Connection.ConnectionString);

    DbSyncContext dataRetriever = changeData as DbSyncContext;
    if (dataRetriever != null && dataRetriever.IsDataBatched)
    {
        string remotePeerId = dataRetriever.MadeWithKnowledge.ReplicaId.ToString();

        // Data is batched. The client should have uploaded this file to us prior to calling ApplyChanges.
        // So look for it.
        // The Id would be the DbSyncContext.BatchFileName which is just the batch file name without the complete path
        string localBatchFileName = null;
        if (!this.batchIdToFileMapper.TryGetValue(dataRetriever.BatchFileName, out localBatchFileName))
        {
            // Service has not received this file. Throw exception
            throw new FaultException<WebSyncFaultException>(new WebSyncFaultException("No batch file uploaded for id " + dataRetriever.BatchFileName, null));
        }
        dataRetriever.BatchFileName = localBatchFileName;
    }
Any ideas?
For the "batch file not available" issue, remove the IsOneWay = true setting from IRelationalSyncContract.UploadBatchFile. When the batch file is big, ApplyChanges can be called before the previous UploadBatchFile has fully completed.
// Replace
[OperationContract(IsOneWay = true)]
// with
[OperationContract]
void UploadBatchFile(string batchFileid, byte[] batchFile, string remotePeerId);
I suppose it's simply a stupid example. It shows "some" technique but assumes you have to arrange it in the proper order yourself.
http://msdn.microsoft.com/en-us/library/cc807255.aspx