failed to match connection hostname (api.shinyapps.io) against server certificate names - shiny

When deploying a Shiny app, the RStudio IDE shows the error:
Error in curl::curl_fetch_memory(url, handle = handle) :
schannel: CertGetNameString() failed to match connection hostname (api.shinyapps.io) against server certificate names
Calls: ... tryCatch -> tryCatchList -> tryCatchOne ->
Timing stopped at: 0.02 0.02 0.8
Execution halted
This is on Windows 7 (64-bit).
The details:
----- Deployment log started at 2019-08-03 01:42:06 -----
Deploy command:
rsconnect::deployApp(appDir = "D:/shinyTest/_app002", appFileManifest = "C:/Users/ADMINI~1/AppData/Local/Temp/2013-805c-ca6a-04c2", account = "r00", server = "shinyapps.io", appName = "_app002", appTitle = "_app002", launch.browser = function(url) { message("Deployment completed: ", url) }, lint = FALSE, metadata = list(asMultiple = FALSE, asStatic = FALSE), logLevel = "verbose")
Session information:
R version 3.6.1 (2019-07-05)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
Matrix products: default
locale:
[1] LC_COLLATE=Chinese (Simplified)_People's Republic of China.936
[2] LC_CTYPE=Chinese (Simplified)_People's Republic of China.936
[3] LC_MONETARY=Chinese (Simplified)_People's Republic of China.936
[4] LC_NUMERIC=C
[5] LC_TIME=Chinese (Simplified)_People's Republic of China.936
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] compiler_3.6.1 rsconnect_0.8.15
Cookies:
[1] "None"
----- Deployment error -----
Error in curl::curl_fetch_memory(url, handle = handle) :
schannel: CertGetNameString() failed to match connection hostname (api.shinyapps.io) against server certificate names
Calls: <Anonymous> ... tryCatch -> tryCatchList -> tryCatchOne -> <Anonymous>
----- Error stack trace -----
20: stop(e)
19: value[[3L]](cond)
18: tryCatchOne(expr, names, parentenv, handlers[[1L]])
17: tryCatchList(expr, classes, parentenv, handlers)
16: tryCatch({
response <- curl::curl_fetch_memory(url, handle = handle)
}, error = function(e, ...) {
if (identical(e$message, "Callback aborted") || identical(e$message,
"transfer closed with outstanding read data remaining"))
return(NULL)
else stop(e)
})
15: system.time(gcFirst = FALSE, tryCatch({
response <- curl::curl_fetch_memory(url, handle = handle)
}, error = function(e, ...) {
if (identical(e$message, "Callback aborted") || identical(e$message,
"transfer closed with outstanding read data remaining"))
return(NULL)
else stop(e)
}))
14: http(service$protocol, service$host, service$port, method, url,
headers, timeout = timeout, certificate = certificate)
13: httpInvokeRequest(..., http = httpFunction())
12: httpRequest(service, authInfo, "GET", path, query, headers, timeout)
11: GET(service, authInfo, path, queryWithList)
10: grepl(contentType, response$contentType, fixed = TRUE)
9: isContentType(response, "application/json")
8: handleResponse(GET(service, authInfo, path, queryWithList))
7: listRequest(service, authInfo, path, query, "applications")
6: client$listApplications(accountInfo$accountId, filters = list(name = name))
5: getAppByName(client, accountInfo, target$appName)
4: applicationForTarget(client, accountDetails, target, forceUpdate)
3: force(code)
2: withStatus(paste0("Preparing to deploy ", assetTypeName), {
application <- applicationForTarget(client, accountDetails,
target, forceUpdate)
})
1: rsconnect::deployApp(appDir = "D:/shinyTest/_app002",
appFileManifest = "C:/Users/ADMINI~1/AppData/Local/Temp/2013-805c-ca6a-04c2",
account = "r00", server = "shinyapps.io", appName = "_app002",
appTitle = "_app002", launch.browser = function(url) {
message("Deployment completed: ", url)
}, lint = FALSE, metadata = list(asMultiple = FALSE, asStatic = FALSE),
logLevel = "verbose")
Error in curl::curl_fetch_memory(url, handle = handle) :
schannel: CertGetNameString() failed to match connection hostname (api.shinyapps.io) against server certificate names
Calls: <Anonymous> ... tryCatch -> tryCatchList -> tryCatchOne -> <Anonymous>
Timing stopped at: 0.02 0.02 0.8
Execution halted
Can anyone help me? Thanks!

I had this issue just yesterday and saw the same error when trying to deploy an app. If you're like me, you searched and found that similar errors are reported when a proxy server is in the way, so I am assuming that you are not behind a proxy server. If you are, follow the proxy-related advice given elsewhere.
I am also assuming that you have previously set up your account via rsconnect::setAccountInfo(). If not, follow the instructions in the rsconnect documentation.
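For reference, the one-time account setup looks roughly like this (a sketch; the token and secret values are placeholders to be copied from your shinyapps.io dashboard):
# Register the shinyapps.io account with rsconnect (placeholder credentials)
rsconnect::setAccountInfo(name   = "r00",
                          token  = "YOUR_TOKEN",
                          secret = "YOUR_SECRET")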
If all that is set, note that the error text shows the failure actually comes from the curl package:
Error in curl::curl_fetch_memory(url, handle = handle) : schannel: CertGetNameString() failed to match connection hostname
(api.shinyapps.io) against server certificate names
The simple answer is to update the curl package specifically: install.packages('curl'). That worked for me... does that help?
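In full, the sequence would be roughly the following (a sketch; restart the R session after installing so the new curl build is actually loaded, and adjust appDir to your project):
# Update curl from CRAN, then redeploy after restarting R
install.packages("curl")
# ... restart the R session here ...
rsconnect::deployApp(appDir = "D:/shinyTest/_app002")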

Related

Error: Deployment Failed while using Matic Test Network

I am getting started with application development on Matic, and I am following the instructions provided in the docs: https://docs.matic.network/docs/develop/getting-started
But I ran into a problem while using Truffle. After I run the command
truffle migrate --network matic
the error is as follows:
Compiling your contracts...
===========================
> Everything is up to date, there is nothing to compile.
Starting migrations...
======================
> Network name: 'matic'
> Network id: 80001
> Block gas limit: 20000000 (0x1312d00)
1_initial_migration.js
======================
Deploying 'Migrations'
----------------------
Error: *** Deployment Failed ***
"Migrations" -- insufficient funds for gas * price + value.
at /usr/local/lib/node_modules/truffle/build/webpack:/packages/deployer/src/deployment.js:365:1
at process._tickCallback (internal/process/next_tick.js:68:7)
Truffle v5.1.55 (core: 5.1.55)
Node v10.19.0
The Truffle configuration file is as follows:
const HDWalletProvider = require('truffle-hdwallet-provider');
const fs = require('fs');
const mnemonic = fs.readFileSync(".secret").toString().trim();

module.exports = {
  networks: {
    development: {
      host: "127.0.0.1",   // Localhost (default: none)
      port: 8545,          // Standard Ethereum port (default: none)
      network_id: "*",     // Any network (default: none)
    },
    matic: {
      provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.matic.today`),
      network_id: 80001,
      confirmations: 2,
      timeoutBlocks: 200,
      skipDryRun: true
    },
  },
  // Set default mocha options here, use special reporters etc.
  mocha: {
    // timeout: 100000
  },
  // Configure your compilers
  compilers: {
    solc: {
    }
  }
}
It worked fine on the development network using
truffle develop
Can someone tell me how to overcome this error while using the Matic Test Network?
Make sure you have enough MATIC for the transaction; you can get some from here and transfer them to account[0]. I had issues deploying with Truffle as all my tokens were at account[1] rather than account[0].
In my case it was ETH; ensuring I had tokens in the first account did the trick.
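If your funds are at a later derivation index, one option is to tell the provider which account to derive. A sketch, assuming the truffle-hdwallet-provider version used above (its constructor takes an optional address index as the third argument):
// In truffle-config.js: derive account[1] instead of the default account[0]
matic: {
  provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.matic.today`, 1),
  network_id: 80001,
  confirmations: 2,
  timeoutBlocks: 200,
  skipDryRun: true
},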

AWS X-Ray, Dotnet Core 3.1, X-Ray Daemon Locally

We are trying to get a local dotnet core 3.1 app to send X-Ray trace data to a local X-Ray daemon. As a start, we've created a generic web API and added Swagger (just to make testing easier).
Startup.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.HttpsPolicy;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.OpenApi.Models;
using Amazon.XRay.Recorder.Core;
using log4net;
using log4net.Config;
using System.Reflection;
using System.IO;
using Amazon;
using System.Net;
using Amazon.XRay.Recorder.Core.Internal.Utils;
using Amazon.XRay.Recorder.Core.Sampling.Local;
namespace AWS_XRay
{
public class Startup
{
public static ILog log;
static Startup() // create log4j instance
{
var logRepository = LogManager.GetRepository(Assembly.GetEntryAssembly());
XmlConfigurator.Configure(logRepository, new FileInfo("log4net.config"));
log = LogManager.GetLogger(typeof(Startup));
AWSXRayRecorder.RegisterLogger(LoggingOptions.Log4Net);
}
public Startup(IConfiguration configuration)
{
Configuration = configuration;
Environment.SetEnvironmentVariable("AWS_XRAY_DAEMON_ADDRESS", "127.0.0.1:2000");
Environment.SetEnvironmentVariable("AWS_XRAY_CONTEXT_MISSING", "LOG_ERROR");
var recorder = new AWSXRayRecorderBuilder().WithSamplingStrategy(new LocalizedSamplingStrategy("sampling-rules.json")).Build();
AWSXRayRecorder.InitializeInstance(configuration, recorder);
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
// Register the Swagger generator, defining 1 or more Swagger documents
services.AddSwaggerGen(c =>
{
c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" });
});
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
app.UseXRay("WeatherForecast");
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseSwagger();
app.UseSwaggerUI(c =>
{
c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
c.RoutePrefix = string.Empty;
});
app.UseHttpsRedirection();
app.UseRouting();
app.UseAuthorization();
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
});
}
}
}
Then we decorated the controller with what we think are the relevant calls.
WeatherController
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Amazon.XRay.Recorder.Core;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
namespace AWS_XRay.Controllers
{
[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
private static readonly string[] Summaries = new[]
{
"Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
};
private readonly ILogger<WeatherForecastController> _logger;
public WeatherForecastController(ILogger<WeatherForecastController> logger)
{
_logger = logger;
}
[HttpGet]
[Route("GetWeather")]
public async Task<IActionResult> WeatherForecast()
{
AWSXRayRecorder.Instance.BeginSegment("weatherget"); // generates `TraceId` for you
try
{
var rng = new Random();
var result = Enumerable.Range(1, 5).Select(index => new WeatherForecast
{
Date = DateTime.Now.AddDays(index),
TemperatureC = rng.Next(-20, 55),
Summary = Summaries[rng.Next(Summaries.Length)]
})
.ToArray();
// can create custom subsegments
return Ok(result);
}
catch (Exception e)
{
AWSXRayRecorder.Instance.AddException(e);
return StatusCode(500, e);
}
finally
{
AWSXRayRecorder.Instance.EndSegment();
}
}
}
}
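As an aside, the "// can create custom subsegments" comment above refers to calls like the following (a sketch; "build-forecast" is an arbitrary subsegment name):
// Inside an open segment: wrap a unit of work in a custom subsegment
AWSXRayRecorder.Instance.BeginSubsegment("build-forecast");
try
{
    // ... work to be traced ...
}
finally
{
    AWSXRayRecorder.Instance.EndSubsegment();
}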
When running the application and looking at the logs, this is what we see...
sdk-log.txt
2020-04-14 16:04:21,740 [1] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Initializing with custom sampling configuration : sampling-rules.json
2020-04-14 16:04:22,035 [1] DEBUG Amazon.XRay.Recorder.Core.Internal.Utils.IPEndPointExtension - Determined that 127.0.0.1:2000 is an IP.
2020-04-14 16:04:22,039 [1] INFO Amazon.XRay.Recorder.Core.Internal.Utils.IPEndPointExtension - Using custom daemon address for UDP and TCP: 127.0.0.1:2000
2020-04-14 16:04:22,042 [1] DEBUG Amazon.XRay.Recorder.Core.Strategies.DefaultExceptionSerializationStrategy - Setting max stack frame size : 50
2020-04-14 16:04:22,073 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorderImpl - Context missing mode : RUNTIME_ERROR
2020-04-14 16:04:22,073 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorderImpl - AWS_XRAY_CONTEXT_MISSING environment variable is set to LOG_ERROR. Override local value.
2020-04-14 16:04:22,078 [1] DEBUG Amazon.XRay.Recorder.Core.Internal.Utils.IPEndPointExtension - Determined that 127.0.0.1:2000 is an IP.
2020-04-14 16:04:22,078 [1] INFO Amazon.XRay.Recorder.Core.Internal.Utils.IPEndPointExtension - Using custom daemon address for UDP and TCP: 127.0.0.1:2000
2020-04-14 16:04:22,078 [1] DEBUG Amazon.XRay.Recorder.Core.Strategies.DefaultExceptionSerializationStrategy - Setting max stack frame size : 50
2020-04-14 16:04:22,078 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorderImpl - Context missing mode : RUNTIME_ERROR
2020-04-14 16:04:22,078 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorderImpl - AWS_XRAY_CONTEXT_MISSING environment variable is set to LOG_ERROR. Override local value.
2020-04-14 16:04:22,078 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorder - Using custom X-Ray recorder.
2020-04-14 16:04:22,079 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorderImpl - Context missing mode : RUNTIME_ERROR
2020-04-14 16:04:22,080 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorderImpl - AWS_XRAY_CONTEXT_MISSING environment variable is set to LOG_ERROR. Override local value.
2020-04-14 16:04:22,899 [4] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:22,911 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /index.html, method = GET
2020-04-14 16:04:23,393 [4] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:23,394 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /swagger/v1/swagger.json, method = GET
2020-04-14 16:04:27,497 [4] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:27,499 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /WeatherForecast/GetWeather, method = GET
2020-04-14 16:04:27,602 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = , path = , method =
2020-04-14 16:04:29,740 [4] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:29,741 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /WeatherForecast/GetWeather, method = GET
2020-04-14 16:04:29,745 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = , path = , method =
2020-04-14 16:04:30,149 [13] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:30,150 [13] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /WeatherForecast/GetWeather, method = GET
2020-04-14 16:04:30,152 [13] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = , path = , method =
2020-04-14 16:04:30,346 [4] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:30,346 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /WeatherForecast/GetWeather, method = GET
2020-04-14 16:04:30,349 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = , path = , method =
2020-04-14 16:04:30,517 [13] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:30,518 [13] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /WeatherForecast/GetWeather, method = GET
2020-04-14 16:04:30,529 [13] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = , path = , method =
2020-04-14 16:30:02,682 [1] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Initializing with custom sampling configuration : sampling-rules.json
Question 1
Based on the log output above, is there any trace data being sent to the daemon? We can't see any errors in the output (the log level is set to DEBUG), but the absence of errors doesn't definitively tell us that trace data is being sent.
Daemon Config & Logs
cfg.yaml
# Maximum buffer size in MB (minimum 3). Choose 0 to use 1% of host memory.
TotalBufferSizeMB: 0
# Maximum number of concurrent calls to AWS X-Ray to upload segment documents.
Concurrency: 8
# Send segments to AWS X-Ray service in a specific region
Region: "eu-west-1"
# Change the X-Ray service endpoint to which the daemon sends segment documents.
Endpoint: "xray.eu-west-1.amazonaws.com"
Socket:
# Change the address and port on which the daemon listens for UDP packets containing segment documents.
UDPAddress: "127.0.0.1:2000"
# Change the address and port on which the daemon listens for HTTP requests to proxy to AWS X-Ray.
TCPAddress: "127.0.0.1:2000"
Logging:
LogRotation: true
# Change the log level, from most verbose to least: dev, debug, info, warn, error, prod (default).
LogLevel: "dev"
# Output logs to the specified file path.
LogPath: "xray.log"
# Turn on local mode to skip EC2 instance metadata check.
LocalMode: true
# Amazon Resource Name (ARN) of the AWS resource running the daemon.
ResourceARN: ""
# Assume an IAM role to upload segments to a different account.
RoleARN: "************************"
# Disable TLS certificate verification.
NoVerifySSL: false
# Upload segments to AWS X-Ray through a proxy.
ProxyAddress: ""
# Daemon configuration file format version.
Version: 2
Looking at the log file
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Skipped telemetry data as no segments found
2020-04-14T16:35:40+02:00 [Debug] telemetry: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] processor: done!
2020-04-14T16:35:40+02:00 [Debug] Trace segment: received: 0, truncated: 0, processed: 0
2020-04-14T16:35:40+02:00 [Debug] Shutdown finished. Current epoch in nanoseconds: 1586874940496183800
2020-04-14T16:35:42+02:00 [Info] Initializing AWS X-Ray daemon 3.2.0
2020-04-14T16:35:42+02:00 [Debug] Listening on UDP 127.0.0.1:2000
2020-04-14T16:35:42+02:00 [Info] Using buffer memory limit of 80 MB
2020-04-14T16:35:42+02:00 [Info] 1280 segment buffers allocated
2020-04-14T16:35:42+02:00 [Debug] Using Endpoint read from Config file: xray.eu-west-1.amazonaws.com
2020-04-14T16:35:42+02:00 [Debug] Using proxy address:
2020-04-14T16:35:42+02:00 [Debug] Fetch region eu-west-1 from commandline/config file
2020-04-14T16:35:42+02:00 [Info] Using region: eu-west-1
2020-04-14T16:35:42+02:00 [Debug] ARN of the AWS resource running the daemon:
2020-04-14T16:35:42+02:00 [Debug] No Metadata set for telemetry records
2020-04-14T16:35:42+02:00 [Debug] Using Endpoint: https://xray.eu-west-1.amazonaws.com
2020-04-14T16:35:42+02:00 [Debug] Telemetry initiated
2020-04-14T16:35:42+02:00 [Info] HTTP Proxy server using X-Ray Endpoint : xray.eu-west-1.amazonaws.com
2020-04-14T16:35:42+02:00 [Debug] Using Endpoint: https://xray.eu-west-1.amazonaws.com
2020-04-14T16:35:42+02:00 [Debug] Batch size: 50
Question 2
Looking at the daemon's log file, the line Trace segment: received: 0, truncated: 0, processed: 0 seems to indicate that it never received trace data. Why not? What are we missing? I suspect that we are not instrumenting the application properly, but I'm not sure.
For anyone who's interested, here is the solution to the problem (actually multiple problems).
Step 1 - Startup File Code
public Startup(IConfiguration configuration)
{
    Configuration = configuration;
    AWSXRayRecorder.InitializeInstance(configuration: Configuration); // Initialize the X-Ray recorder with the app configuration
    AWSSDKHandler.RegisterXRayForAllServices(); // All AWS SDK requests will be traced (requires the Amazon.XRay.Recorder.Handlers.AwsSdk namespace)
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    // Make sure this is after env.IsDevelopment()
    app.UseXRay("WeatherForecast");
    .....
}
Make sure appsettings.json and sampling-rules.json mimic the Sample App.
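Note that the question's SDK log shows every request matching a local rule with fixedTarget=0 and rate=0, which samples nothing; the sampling rules must let at least some requests through. A minimal sampling-rules.json along those lines (a sketch of the local-rules format, not copied from the Sample App):
{
  "version": 2,
  "rules": [],
  "default": {
    "fixed_target": 1,
    "rate": 0.1
  }
}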
Once the code runs, the app's log file should look something like this.
I felt that the AWS SDK package generates a lot of noise even when using the Sample App, so I omitted most of it here. That said, DEBUG logs tend to be that way.
2020-04-15 11:34:04,262 [5] INFO Amazon.XRay.Recorder.Core.Internal.Utils.DaemonConfig - The given daemonAddress () is invalid, using default daemon UDP and TCP address 127.0.0.1:2000.
2020-04-15 11:34:04,368 [5] INFO Amazon.Runtime.Internal.RuntimePipelineCustomizerRegistry - Applying runtime pipeline customization X-Ray Registration Customization
2020-04-15 11:34:04,389 [5] INFO Amazon.XRay.Recorder.Core.Sampling.DefaultSamplingStrategy - No effective centralized sampling rule match. Fallback to local rules.
2020-04-15 11:34:04,390 [5] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Can't match a rule for host = localhost, path = /index.html, method = GET
2020-04-15 11:34:04,573 [5] DEBUG Amazon.XRay.Recorder.Core.Internal.Emitters.UdpSegmentEmitter - UDP Segment emitter endpoint: 127.0.0.1:2000.
Ultimately, you are looking for the last line Amazon.XRay.Recorder.Core.Internal.Emitters.UdpSegmentEmitter - UDP Segment emitter endpoint: 127.0.0.1:2000.
Step 2 - Configure the Daemon
If you install the daemon as a Windows service locally, you may run into a couple of additional problems, as I did.
A - The installer doesn't put everything in one place, and the service doesn't look at the configuration file that was extracted unless you put the cfg.yaml file in System32.
B - The service probably won't have access to the .aws folder where the credentials are stored.
I fixed problem A by doing the following (I'm sure you could achieve the same goal in multiple ways):
Since I'm not a PowerShell expert, I just moved the extracted content to a folder of my choosing and modified the service path in the registry to point to that folder, adding the appropriate flags so that it logs to the location you expect and uses the cfg.yaml file you expect.
regedit -> Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AWSXRayDaemon
Set the image path with the -f flag for the log file and -c for the config file:
C:\YOUR USER\.aws\aws-xray-daemon\xray.exe -f C:\YOUR USER\.aws\aws-xray-daemon\xray-daemon.log -c C:\YOUR USER\.aws\aws-xray-daemon\cfg.yaml
The last problem was the daemon not having the appropriate permissions to access the credentials file inside the .aws folder.
In that case the log file will look something like this:
2020-04-15T09:35:54+02:00 [Debug] processor: sending partial batch
2020-04-15T09:35:54+02:00 [Debug] processor: segment batch size: 1. capacity: 50
2020-04-15T09:35:54+02:00 [Error] Unable to sign request: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2020-04-15T09:35:54+02:00 [Error] Sending segment batch failed with: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
The NoCredentialProviders line indicates a permission issue.
I then modified the service to run as an administrator, which solved problem B.
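For reference, the credentials file the daemon is looking for is the standard AWS credentials file (all values below are placeholders):
# %UserProfile%\.aws\credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX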
daemon.log
2020-04-15T09:41:31+02:00 [Debug] Received request on HTTP Proxy server : /GetSamplingRules
2020-04-15T09:41:32+02:00 [Debug] processor: sending partial batch
2020-04-15T09:41:32+02:00 [Debug] processor: segment batch size: 1. capacity: 50
2020-04-15T09:41:33+02:00 [Debug] Received request on HTTP Proxy server : /GetSamplingRules
2020-04-15T09:41:33+02:00 [Info] Successfully sent batch of 1 segments (0.871 seconds)
2020-04-15T09:41:34+02:00 [Debug] processor: sending partial batch
2020-04-15T09:41:34+02:00 [Debug] processor: segment batch size: 1. capacity: 50
2020-04-15T09:41:34+02:00 [Info] Successfully sent batch of 1 segments (0.197 seconds)
You are looking for the Successfully sent batch lines as confirmation that the daemon sent the traces to the X-Ray service.
Hope this helps someone.
Cheers
Looking at the daemon logs, it appears that trace data is not being sent to the service. I think instrumentation could be the issue. I would recommend reading this documentation on instrumentation: https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-dotnet.html. You might have to instrument outgoing HTTP calls, incoming HTTP requests, and outgoing AWS SDK calls in order to see the trace view of your application. Hope this helps!

Connection from Linux OMI (omicli) to Windows WMI fails with DMTF related error

I am implementing an OMI client on CentOS in C++ to communicate with Windows WMI.
I have installed OMI on Linux CentOS 7 and am trying to connect to Windows 7 using the sample utility provided by OMI.
Reference: https://github.com/Microsoft/omi
I have also configured WinRM on Windows to accept basic authentication calls.
I am not able to get the sample working. I am getting the following error:
[root@LinuxMachine bin]# ./omicli --auth Basic --hostname WinMachine.TEST.COM -u admin -p adminpassword ei root/cimv2 Win32_Environment --port 5985
./omicli: result: MI_RESULT_FAILED
./omicli: result: ERROR_INTERNAL_ERROR: The WS-Management service cannot process the request. A DMTF resource URI was used to access a non-DMTF class. Try again using a non-DMTF resource URI.
Below is the WinRM configuration of the destination machine, for reference:
C:\Windows\system32>winrm get winrm/config
Config
MaxEnvelopeSizekb = 150
MaxTimeoutms = 60000
MaxBatchItems = 32000
MaxProviderRequests = 4294967295
Client
NetworkDelayms = 5000
URLPrefix = wsman
AllowUnencrypted = true [Source="GPO"]
Auth
Basic = true [Source="GPO"]
Digest = true [Source="GPO"]
Kerberos = true [Source="GPO"]
Negotiate = true [Source="GPO"]
Certificate = true
CredSSP = true [Source="GPO"]
DefaultPorts
HTTP = 5985
HTTPS = 5986
TrustedHosts
Service
RootSDDL = O:NSG:BAD:P(A;;AG;;;BA)S:P(AU;FA;GA;;;WD)(AU;SA;GWGX;;;WD)
MaxConcurrentOperations = 4294967295
MaxConcurrentOperationsPerUser = 15
EnumerationTimeoutms = 60000
MaxConnections = 25
MaxPacketRetrievalTimeSeconds = 120
AllowUnencrypted = true
Auth
Basic = true [Source="GPO"]
Kerberos = true [Source="GPO"]
Negotiate = true [Source="GPO"]
Certificate = false
CredSSP = true [Source="GPO"]
CbtHardeningLevel = Relaxed
DefaultPorts
HTTP = 5985
HTTPS = 5986
IPv4Filter = *
IPv6Filter = *
EnableCompatibilityHttpListener = false
EnableCompatibilityHttpsListener = false
CertificateThumbprint
Winrs
AllowRemoteShellAccess = true
IdleTimeout = 180000
MaxConcurrentUsers = 5
MaxShellRunTime = 2147483647
MaxProcessesPerShell = 15
MaxMemoryPerShellMB = 150
MaxShellsPerUser = 5
Am I missing anything obvious? Any help with getting the sample working is much appreciated.
I encountered a similar issue and resolved it by upgrading the PowerShell version on my server.
Windows 7 by default uses PowerShell version 2.0:
PS C:\> test-wsman <clientName>
wsmid : http://schemas.dmtf.org/wbem/wsman/identity/1/wsmanidentity.xsd
ProtocolVersion : http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
ProductVendor : Microsoft Corporation
ProductVersion : OS: 0.0.0 SP: 0.0 Stack: 2.0
By default, a CIM session uses the WS-MAN protocol, specifically a newer version of the protocol.
This won't work for computers running PowerShell version 2.0, or no PowerShell at all.
Upgrade your PowerShell version to resolve this issue.
Refer to https://mcpmag.com/articles/2013/05/07/remote-to-second-powershell.aspx for more details.
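To check what you are running before and after the upgrade (a quick sketch, run on the Windows box; <clientName> as in the output above):
# Show the local PowerShell version
PS C:\> $PSVersionTable.PSVersion
# Query the WS-Management stack of a remote machine, as shown above
PS C:\> test-wsman <clientName>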

reactivemongo play 2.3 0.11.7 to 0.11.11 upgrade raises ExceptionInInitializerError

[I am re-editing this question to reflect my latest tests]
I am trying to upgrade my Akka / Play 2.3 application from
"org.reactivemongo" %% "play2-reactivemongo" % "0.11.7.play23"
to
"org.reactivemongo" %% "play2-reactivemongo" % "0.11.11-play23"
Compilation goes fine, but at run time I get the following error:
[ERROR] -- NettyTransport(akka://reactivemongo)
failed to bind to /127.0.0.1:2552, shutting down Netty transport
...
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:2552
The Akka part of application.conf reads as follows:
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    mailbox {
      requirements {
        "akka.dispatch.BoundedMessageQueueSemantics" = bounded-mailbox
      }
    }
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
    }
  }
}
The exception is raised when trying to instantiate the ReactiveMongo driver:
val driver = new reactivemongo.api.MongoDriver()
This suggests that the MongoDriver is using Akka under the hood and is binding to the same address as my main application. And indeed, if I edit my application.conf and change akka.remote.netty.tcp.port from 2552 to 2553, I get the following exception:
[ERROR] -- NettyTransport(akka://reactivemongo)
failed to bind to /127.0.0.1:2553, shutting down Netty transport
In previous versions of ReactiveMongo, instantiating the driver started a new actor system by default, so maybe version 0.11.11 tries to reuse the existing system?
I have tried to modify the Akka port used by the driver as follows:
val customConf = ConfigFactory.parseString("""
  akka {
    remote {
      netty.tcp.port = 4711
    }
  }
""")
val typesafeConfig: com.typesafe.config.Config = ConfigFactory.load(customConf)
val driver = new reactivemongo.api.MongoDriver(Some(typesafeConfig))
But this does not work; the new port is not taken into account, and I keep getting the same error:
[ERROR] -- NettyTransport(akka://reactivemongo)
failed to bind to /127.0.0.1:2552, shutting down Netty transport
Actually, ReactiveMongo loads its configuration from the key 'mongo-async-driver'.
So adding the following lets you configure ReactiveMongo's underlying Akka system:
val customConf = ConfigFactory.parseString("""
  mongo-async-driver {
    akka {
      loglevel = WARNING
      remote {
        enabled-transports = ["akka.remote.netty.tcp"]
        netty.tcp {
          hostname = "127.0.0.1"
          port = 4711
        }
      }
    }
  }
""")

ERROR 503: Service not available at persist HDFS

I have an Orion instance with Cygnus at filab; subscription and notification run fine, but I cannot persist data to cosmos.lab.fi-ware.org.
Cygnus returns this error:
[ERROR - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionSink.process(OrionSink.java:139)] Persistence error (The talky/talkykar/room6_room directory could not be created in HDFS. HttpFS response: 503 Service unavailable)
This is my agent_a.conf file:
cygnusagent.sources = http-source
cygnusagent.sinks = hdfs-sink
cygnusagent.channels = hdfs-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = hdfs-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = talky
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = talkykar
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts de
# Timestamp interceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# Destination extractor interceptor, do not change
cygnusagent.sources.http-source.interceptors.de.type = es.tid.fiware.fiwareconnectors.cygnus.interceptors.DestinationExtractor$Builder
# Matching table for the destination extractor interceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.de.matching_table = /usr/cygnus/conf/matching_table.conf
# ============================================
# OrionHDFSSink configuration
# channel name from where to read notification events
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
# sink class, must not be changed
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
# Comma-separated list of FQDN/IP address regarding the Cosmos Namenode endpoints
# If you are using Kerberos authentication, then the usage of FQDNs instead of IP addresses is mandatory
cygnusagent.sinks.hdfs-sink.cosmos_host = http://cosmos.lab.fi-ware.org
# port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs and free choice for inifinty
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
# default username allowed to write in HDFS
cygnusagent.sinks.hdfs-sink.cosmos_default_username = myuser
# default password for the default username
cygnusagent.sinks.hdfs-sink.cosmos_default_password = mypass
# HDFS backend type (webhdfs, httpfs or infinity)
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
# how the attributes are stored, either per row either per column (row, column)
cygnusagent.sinks.hdfs-sink.attr_persistence = row
# Hive FQDN/IP address of the Hive server
cygnusagent.sinks.hdfs-sink.hive_host = http://cosmos.lab.fi-ware.org
# Hive port for Hive external table provisioning
cygnusagent.sinks.hdfs-sink.hive_port = 10000
# Kerberos-based authentication enabling
cygnusagent.sinks.hdfs-sink.krb5_auth = false
# Kerberos username
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username
# Kerberos password
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx
# Kerberos login file
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf
# Kerberos configuration file
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf
#=============================================
And this is the Cygnus log:
2015-05-04 09:05:10,434 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink.persist(OrionHDFSSink.java:315)] [hdfs-sink] Persisting data at OrionHDFSSink. HDFS file (talky/talkykar/room6_room/room6_room.txt), Data ({"recvTimeTs":"1430723069","recvTime":"2015-05-04T09:04:29.819","entityId":"Room6","entityType":"Room","attrName":"temperature","attrType":"float","attrValue":"26.5","attrMd":[]})
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:255)] HDFS request: PUT http://http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/mped.mlg/talky/talkykar/room6_room?op=mkdirs&user.name=mped.mlg HTTP/1.1
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:186)] Connection request: [route: {}->http://http][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 500]
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:220)] Connection leased: [id: 21][route: {}->http://http][total kept alive: 0; route allocated: 1 of 100; total allocated: 1 of 500]
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.close(DefaultClientConnection.java:169)] Connection org.apache.http.impl.conn.DefaultClientConnection#5700187d closed
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.shutdown(DefaultClientConnection.java:154)] Connection org.apache.http.impl.conn.DefaultClientConnection#5700187d shut down
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.releaseConnection(PoolingClientConnectionManager.java:272)] Connection [id: 21][route: {}->http://http] can be kept alive for 9223372036854775807 MILLISECONDS
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.close(DefaultClientConnection.java:169)] Connection org.apache.http.impl.conn.DefaultClientConnection#5700187d closed
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.releaseConnection(PoolingClientConnectionManager.java:278)] Connection released: [id: 21][route: {}->http://http][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 500]
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:191)] The used HDFS endpoint is not active, trying another one (host=http://cosmos.lab.fi-ware.org)
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionSink.process(OrionSink.java:139)] Persistence error (The talky/talkykar/room6_room directory could not be created in HDFS. HttpFS response: 503 Service unavailable)
Thanks.
If you take a look at this log line:
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:255)] HDFS request: PUT http://http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/mped.mlg/talky/talkykar/room6_room?op=mkdirs&user.name=mped.mlg HTTP/1.1
you will see you are trying to create an HDFS directory by using a http://http://cosmos.lab... URL (please notice the doubled http://).
This is because you have configured:
cygnusagent.sinks.hdfs-sink.cosmos_host = http://cosmos.lab.fi-ware.org
Instead of:
cygnusagent.sinks.hdfs-sink.cosmos_host = cosmos.lab.fi-ware.org
Such a parameter asks for a host, not a URL (the same applies to hive_host, which is also configured with a URL).
That being said, in future releases we will allow both encodings.
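As a quick way to verify the endpoint independently of Cygnus, you can issue the same HttpFS operation that appears in the log directly (a sketch; replace myuser with your Cosmos username):
curl -i -X PUT "http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/myuser/test_dir?op=MKDIRS&user.name=myuser"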