I am getting started with application development on Matic, following the instructions in the docs: https://docs.matic.network/docs/develop/getting-started
However, I ran into a problem while using Truffle. After I run the command
truffle migrate --network matic
I get the following error:
Compiling your contracts...
===========================
> Everything is up to date, there is nothing to compile.
Starting migrations...
======================
> Network name: 'matic'
> Network id: 80001
> Block gas limit: 20000000 (0x1312d00)
1_initial_migration.js
======================
Deploying 'Migrations'
----------------------
Error: *** Deployment Failed ***
"Migrations" -- insufficient funds for gas * price + value.
at /usr/local/lib/node_modules/truffle/build/webpack:/packages/deployer/src/deployment.js:365:1
at process._tickCallback (internal/process/next_tick.js:68:7)
Truffle v5.1.55 (core: 5.1.55)
Node v10.19.0
My Truffle configuration file is as follows:
const HDWalletProvider = require('truffle-hdwallet-provider');
const fs = require('fs');
const mnemonic = fs.readFileSync(".secret").toString().trim();

module.exports = {
  networks: {
    development: {
      host: "127.0.0.1",  // Localhost (default: none)
      port: 8545,         // Standard Ethereum port (default: none)
      network_id: "*",    // Any network (default: none)
    },
    matic: {
      provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.matic.today`),
      network_id: 80001,
      confirmations: 2,
      timeoutBlocks: 200,
      skipDryRun: true
    },
  },
  // Set default mocha options here, use special reporters etc.
  mocha: {
    // timeout: 100000
  },
  // Configure your compilers
  compilers: {
    solc: {
    }
  }
}
It works fine on the development network using
truffle develop
Can someone tell me how to overcome this error when using the Matic test network?
Make sure you have enough MATIC for the transaction; you can get some from here and transfer them to account[0]. I had issues deploying with Truffle because all my tokens were in account[1] rather than account[0].
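If your funds sit at a different derivation index, you can also point HDWalletProvider at that index instead of moving the tokens. A minimal sketch, assuming the same network entry as above (the third constructor argument of truffle-hdwallet-provider is the address index, which defaults to 0):

matic: {
  // deploy from the account at index 1 instead of the default index 0
  provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.matic.today`, 1),
  network_id: 80001,
  confirmations: 2,
  timeoutBlocks: 200,
  skipDryRun: true
},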
In my case it was ETH; ensuring I had tokens in the first account did the trick.
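To confirm which account Truffle will deploy from and whether it holds funds, you can check from the console. A quick sketch (run truffle console --network matic first; recent Truffle 5 consoles support top-level await, otherwise chain .then()):

const accounts = await web3.eth.getAccounts();
console.log(accounts[0]);                            // the deployer address
console.log(await web3.eth.getBalance(accounts[0])); // balance in wei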
We are trying to get X-Ray trace data from a local .NET Core 3.1 app sending trace data to a local X-Ray daemon. As a start, we've created a generic web API and added Swagger (just to make testing easier).
Startup.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.HttpsPolicy;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.OpenApi.Models;
using Amazon.XRay.Recorder.Core;
using log4net;
using log4net.Config;
using System.Reflection;
using System.IO;
using Amazon;
using System.Net;
using Amazon.XRay.Recorder.Core.Internal.Utils;
using Amazon.XRay.Recorder.Core.Sampling.Local;

namespace AWS_XRay
{
    public class Startup
    {
        public static ILog log;

        static Startup() // create the log4net instance
        {
            var logRepository = LogManager.GetRepository(Assembly.GetEntryAssembly());
            XmlConfigurator.Configure(logRepository, new FileInfo("log4net.config"));
            log = LogManager.GetLogger(typeof(Startup));
            AWSXRayRecorder.RegisterLogger(LoggingOptions.Log4Net);
        }

        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
            Environment.SetEnvironmentVariable("AWS_XRAY_DAEMON_ADDRESS", "127.0.0.1:2000");
            Environment.SetEnvironmentVariable("AWS_XRAY_CONTEXT_MISSING", "LOG_ERROR");
            var recorder = new AWSXRayRecorderBuilder()
                .WithSamplingStrategy(new LocalizedSamplingStrategy("sampling-rules.json"))
                .Build();
            AWSXRayRecorder.InitializeInstance(configuration, recorder);
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllers();
            // Register the Swagger generator, defining 1 or more Swagger documents
            services.AddSwaggerGen(c =>
            {
                c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" });
            });
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            app.UseXRay("WeatherForecast");
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            app.UseSwagger();
            app.UseSwaggerUI(c =>
            {
                c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
                c.RoutePrefix = string.Empty;
            });
            app.UseHttpsRedirection();
            app.UseRouting();
            app.UseAuthorization();
            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllers();
            });
        }
    }
}
Then we decorated the controller with what we believe are the relevant calls.
WeatherForecastController.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Amazon.XRay.Recorder.Core;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace AWS_XRay.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class WeatherForecastController : ControllerBase
    {
        private static readonly string[] Summaries = new[]
        {
            "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
        };

        private readonly ILogger<WeatherForecastController> _logger;

        public WeatherForecastController(ILogger<WeatherForecastController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        [Route("GetWeather")]
        public async Task<IActionResult> WeatherForecast()
        {
            AWSXRayRecorder.Instance.BeginSegment("weatherget"); // generates `TraceId` for you
            try
            {
                var rng = new Random();
                var result = Enumerable.Range(1, 5).Select(index => new WeatherForecast
                {
                    Date = DateTime.Now.AddDays(index),
                    TemperatureC = rng.Next(-20, 55),
                    Summary = Summaries[rng.Next(Summaries.Length)]
                })
                .ToArray();
                // can create custom subsegments
                return Ok(result);
            }
            catch (Exception e)
            {
                AWSXRayRecorder.Instance.AddException(e);
                return StatusCode(500, e);
            }
            finally
            {
                AWSXRayRecorder.Instance.EndSegment();
            }
        }
    }
}
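Where the comment above notes that custom subsegments can be created, a minimal sketch looks like this (the subsegment name is an illustrative placeholder):

// wrap a unit of work in a custom subsegment inside the currently open segment
AWSXRayRecorder.Instance.BeginSubsegment("generate-forecast");
try
{
    // ... the work you want to trace ...
}
finally
{
    AWSXRayRecorder.Instance.EndSubsegment();
}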
When running the application and looking at the logs, this is what we see:
sdk-log.txt
2020-04-14 16:04:21,740 [1] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Initializing with custom sampling configuration : sampling-rules.json
2020-04-14 16:04:22,035 [1] DEBUG Amazon.XRay.Recorder.Core.Internal.Utils.IPEndPointExtension - Determined that 127.0.0.1:2000 is an IP.
2020-04-14 16:04:22,039 [1] INFO Amazon.XRay.Recorder.Core.Internal.Utils.IPEndPointExtension - Using custom daemon address for UDP and TCP: 127.0.0.1:2000
2020-04-14 16:04:22,042 [1] DEBUG Amazon.XRay.Recorder.Core.Strategies.DefaultExceptionSerializationStrategy - Setting max stack frame size : 50
2020-04-14 16:04:22,073 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorderImpl - Context missing mode : RUNTIME_ERROR
2020-04-14 16:04:22,073 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorderImpl - AWS_XRAY_CONTEXT_MISSING environment variable is set to LOG_ERROR. Override local value.
2020-04-14 16:04:22,078 [1] DEBUG Amazon.XRay.Recorder.Core.Internal.Utils.IPEndPointExtension - Determined that 127.0.0.1:2000 is an IP.
2020-04-14 16:04:22,078 [1] INFO Amazon.XRay.Recorder.Core.Internal.Utils.IPEndPointExtension - Using custom daemon address for UDP and TCP: 127.0.0.1:2000
2020-04-14 16:04:22,078 [1] DEBUG Amazon.XRay.Recorder.Core.Strategies.DefaultExceptionSerializationStrategy - Setting max stack frame size : 50
2020-04-14 16:04:22,078 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorderImpl - Context missing mode : RUNTIME_ERROR
2020-04-14 16:04:22,078 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorderImpl - AWS_XRAY_CONTEXT_MISSING environment variable is set to LOG_ERROR. Override local value.
2020-04-14 16:04:22,078 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorder - Using custom X-Ray recorder.
2020-04-14 16:04:22,079 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorderImpl - Context missing mode : RUNTIME_ERROR
2020-04-14 16:04:22,080 [1] DEBUG Amazon.XRay.Recorder.Core.AWSXRayRecorderImpl - AWS_XRAY_CONTEXT_MISSING environment variable is set to LOG_ERROR. Override local value.
2020-04-14 16:04:22,899 [4] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:22,911 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /index.html, method = GET
2020-04-14 16:04:23,393 [4] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:23,394 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /swagger/v1/swagger.json, method = GET
2020-04-14 16:04:27,497 [4] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:27,499 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /WeatherForecast/GetWeather, method = GET
2020-04-14 16:04:27,602 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = , path = , method =
2020-04-14 16:04:29,740 [4] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:29,741 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /WeatherForecast/GetWeather, method = GET
2020-04-14 16:04:29,745 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = , path = , method =
2020-04-14 16:04:30,149 [13] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:30,150 [13] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /WeatherForecast/GetWeather, method = GET
2020-04-14 16:04:30,152 [13] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = , path = , method =
2020-04-14 16:04:30,346 [4] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:30,346 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /WeatherForecast/GetWeather, method = GET
2020-04-14 16:04:30,349 [4] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = , path = , method =
2020-04-14 16:04:30,517 [13] DEBUG Amazon.XRay.Recorder.Handlers.AspNetCore.Internal.AWSXRayMiddleware - Trace header doesn't exist or not valid : (). Injecting a new one.
2020-04-14 16:04:30,518 [13] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = localhost, path = /WeatherForecast/GetWeather, method = GET
2020-04-14 16:04:30,529 [13] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Found a matching rule : (hostToMatch=*, httpMethodToMatch=Get, urlPathToMatch=*, fixedTarget=0, rate=0, description=Weather) for host = , path = , method =
2020-04-14 16:30:02,682 [1] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Initializing with custom sampling configuration : sampling-rules.json
Question 1
Based on the output in the log file, is any trace data actually being sent to the daemon? We can't see any errors in the output and the log level is set to DEBUG, but we can't definitively say that trace data is being sent; all we know is that there are no errors.
Daemon Config & Logs
cfg.yaml
# Maximum buffer size in MB (minimum 3). Choose 0 to use 1% of host memory.
TotalBufferSizeMB: 0
# Maximum number of concurrent calls to AWS X-Ray to upload segment documents.
Concurrency: 8
# Send segments to AWS X-Ray service in a specific region
Region: "eu-west-1"
# Change the X-Ray service endpoint to which the daemon sends segment documents.
Endpoint: "xray.eu-west-1.amazonaws.com"
Socket:
  # Change the address and port on which the daemon listens for UDP packets containing segment documents.
  UDPAddress: "127.0.0.1:2000"
  # Change the address and port on which the daemon listens for HTTP requests to proxy to AWS X-Ray.
  TCPAddress: "127.0.0.1:2000"
Logging:
  LogRotation: true
  # Change the log level, from most verbose to least: dev, debug, info, warn, error, prod (default).
  LogLevel: "dev"
  # Output logs to the specified file path.
  LogPath: "xray.log"
# Turn on local mode to skip EC2 instance metadata check.
LocalMode: true
# Amazon Resource Name (ARN) of the AWS resource running the daemon.
ResourceARN: ""
# Assume an IAM role to upload segments to a different account.
RoleARN: "************************"
# Disable TLS certificate verification.
NoVerifySSL: false
# Upload segments to AWS X-Ray through a proxy.
ProxyAddress: ""
# Daemon configuration file format version.
Version: 2
Looking at the daemon's log file:
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Skipped telemetry data as no segments found
2020-04-14T16:35:40+02:00 [Debug] telemetry: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] Segment batch: done!
2020-04-14T16:35:40+02:00 [Debug] processor: done!
2020-04-14T16:35:40+02:00 [Debug] Trace segment: received: 0, truncated: 0, processed: 0
2020-04-14T16:35:40+02:00 [Debug] Shutdown finished. Current epoch in nanoseconds: 1586874940496183800
2020-04-14T16:35:42+02:00 [Info] Initializing AWS X-Ray daemon 3.2.0
2020-04-14T16:35:42+02:00 [Debug] Listening on UDP 127.0.0.1:2000
2020-04-14T16:35:42+02:00 [Info] Using buffer memory limit of 80 MB
2020-04-14T16:35:42+02:00 [Info] 1280 segment buffers allocated
2020-04-14T16:35:42+02:00 [Debug] Using Endpoint read from Config file: xray.eu-west-1.amazonaws.com
2020-04-14T16:35:42+02:00 [Debug] Using proxy address:
2020-04-14T16:35:42+02:00 [Debug] Fetch region eu-west-1 from commandline/config file
2020-04-14T16:35:42+02:00 [Info] Using region: eu-west-1
2020-04-14T16:35:42+02:00 [Debug] ARN of the AWS resource running the daemon:
2020-04-14T16:35:42+02:00 [Debug] No Metadata set for telemetry records
2020-04-14T16:35:42+02:00 [Debug] Using Endpoint: https://xray.eu-west-1.amazonaws.com
2020-04-14T16:35:42+02:00 [Debug] Telemetry initiated
2020-04-14T16:35:42+02:00 [Info] HTTP Proxy server using X-Ray Endpoint : xray.eu-west-1.amazonaws.com
2020-04-14T16:35:42+02:00 [Debug] Using Endpoint: https://xray.eu-west-1.amazonaws.com
2020-04-14T16:35:42+02:00 [Debug] Batch size: 50
Question 2
Looking at the log file of the daemon, the line Trace segment: received: 0, truncated: 0, processed: 0 seems to indicate that it never received any trace data. Why not, and what are we missing? I suspect that we are not instrumenting the application properly, but I'm not sure.
For anyone who's interested, here is the solution to the problem (actually multiple problems).
Step 1 - Startup File Code
public Startup(IConfiguration configuration)
{
    Configuration = configuration; // assign the injected configuration so the recorder can read it
    AWSXRayRecorder.InitializeInstance(configuration: Configuration); // initialize the X-Ray recorder with the Configuration object
    AWSSDKHandler.RegisterXRayForAllServices(); // all AWS SDK requests will be traced (requires using Amazon.XRay.Recorder.Handlers.AwsSdk;)
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    // Make sure this comes after env.IsDevelopment()
    app.UseXRay("WeatherForecast");
    .....
}
Make sure appsettings.json and sampling-rules.json mimic the sample app's.
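For reference, the X-Ray section of appsettings.json in the sample app looks roughly like this (quoted from memory, so verify the key names against the sample app itself):

{
  "XRay": {
    "DisableXRayTracing": "false",
    "UseRuntimeErrors": "true"
  }
}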
Once the code runs, the log file of the app should look something like this. The AWS SDK package generates a lot of noise even when using the sample app, which I have omitted here. That said, DEBUG logs tend to be that way.
2020-04-15 11:34:04,262 [5] INFO Amazon.XRay.Recorder.Core.Internal.Utils.DaemonConfig - The given daemonAddress () is invalid, using default daemon UDP and TCP address 127.0.0.1:2000.
2020-04-15 11:34:04,368 [5] INFO Amazon.Runtime.Internal.RuntimePipelineCustomizerRegistry - Applying runtime pipeline customization X-Ray Registration Customization
2020-04-15 11:34:04,389 [5] INFO Amazon.XRay.Recorder.Core.Sampling.DefaultSamplingStrategy - No effective centralized sampling rule match. Fallback to local rules.
2020-04-15 11:34:04,390 [5] DEBUG Amazon.XRay.Recorder.Core.Sampling.Local.LocalizedSamplingStrategy - Can't match a rule for host = localhost, path = /index.html, method = GET
2020-04-15 11:34:04,573 [5] DEBUG Amazon.XRay.Recorder.Core.Internal.Emitters.UdpSegmentEmitter - UDP Segment emitter endpoint: 127.0.0.1:2000.
Ultimately, you are looking for the last line Amazon.XRay.Recorder.Core.Internal.Emitters.UdpSegmentEmitter - UDP Segment emitter endpoint: 127.0.0.1:2000.
Step 2 - Configure the Daemon
I installed the daemon as a Windows service locally and ran into a couple of additional problems.
A - It doesn't put everything in one place, and it doesn't look at the configuration file it extracted unless you put the cfg.yaml file in System32.
B - The service probably won't have access to the .aws folder where the credentials are stored.
I fixed problem A by doing the following (I'm sure you could achieve the same goal in multiple ways):
Since I'm not a PowerShell expert, I just moved the extracted content to a folder of my choosing and modified the service path in the registry to point to that folder, adding the appropriate flags so that it logs to the location you expect and uses the cfg.yaml file you expect.
regedit -> Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AWSXRayDaemon
Set the ImagePath value with the -f flag for the log file and the -c flag for the config file:
C:\YOUR USER\.aws\aws-xray-daemon\xray.exe -f C:\YOUR USER\.aws\aws-xray-daemon\xray-daemon.log -c C:\YOUR USER\.aws\aws-xray-daemon\cfg.yaml
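If you prefer to script the registry change, something like this from an elevated prompt should work (paths are placeholders; the escaped inner quotes matter if the paths contain spaces):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\AWSXRayDaemon" /v ImagePath /t REG_EXPAND_SZ /d "\"C:\path\to\xray.exe\" -f \"C:\path\to\xray-daemon.log\" -c \"C:\path\to\cfg.yaml\"" /f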
The last problem was the Daemon not having the appropriate permissions to access the credentials file inside the .aws folder.
The log file will look something like this:
2020-04-15T09:35:54+02:00 [Debug] processor: sending partial batch
2020-04-15T09:35:54+02:00 [Debug] processor: segment batch size: 1. capacity: 50
2020-04-15T09:35:54+02:00 [Error] Unable to sign request: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2020-04-15T09:35:54+02:00 [Error] Sending segment batch failed with: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
The NoCredentialProviders line indicates a permission issue.
I then modified the service to run as an administrator, which solved problem B.
daemon.log
2020-04-15T09:41:31+02:00 [Debug] Received request on HTTP Proxy server : /GetSamplingRules
2020-04-15T09:41:32+02:00 [Debug] processor: sending partial batch
2020-04-15T09:41:32+02:00 [Debug] processor: segment batch size: 1. capacity: 50
2020-04-15T09:41:33+02:00 [Debug] Received request on HTTP Proxy server : /GetSamplingRules
2020-04-15T09:41:33+02:00 [Info] Successfully sent batch of 1 segments (0.871 seconds)
2020-04-15T09:41:34+02:00 [Debug] processor: sending partial batch
2020-04-15T09:41:34+02:00 [Debug] processor: segment batch size: 1. capacity: 50
2020-04-15T09:41:34+02:00 [Info] Successfully sent batch of 1 segments (0.197 seconds)
You are looking for the line "Successfully sent batch" as confirmation that the daemon sent the trace to the X-Ray service.
Hope this helps someone.
Cheers
Looking at the daemon logs, it seems trace data is not being sent to the service. I think instrumentation could be the issue. I would recommend reading this documentation on instrumentation: https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-dotnet.html. You might have to instrument outgoing HTTP calls, incoming HTTP requests, and outgoing AWS SDK calls in order to see the trace view of your application. Hope this helps!
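For outgoing HTTP calls specifically, a minimal sketch using the SDK's tracing handler (from the Amazon.XRay.Recorder.Handlers.System.Net namespace; the target URL is a placeholder) would be:

using System.Net.Http;
using Amazon.XRay.Recorder.Handlers.System.Net;

// wrap the inner handler so each outgoing request is recorded as a subsegment
var client = new HttpClient(new HttpClientXRayTracingHandler(new HttpClientHandler()));
// inside an async method, within an open segment:
var response = await client.GetAsync("https://example.com/api");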
[I am re-editing this question to reflect my latest tests]
I am trying to upgrade my Akka / Play 2.3 application from
"org.reactivemongo" %% "play2-reactivemongo" % "0.11.7.play23"
to
"org.reactivemongo" %% "play2-reactivemongo" % "0.11.11-play23"
Compilation goes fine but at run-time I get the following error:
[ERROR] -- NettyTransport(akka://reactivemongo)
failed to bind to /127.0.0.1:2552, shutting down Netty transport
...
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:2552
The Akka part of application.conf reads as follows:
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"

  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    mailbox {
      requirements {
        "akka.dispatch.BoundedMessageQueueSemantics" = bounded-mailbox
      }
    }
  }

  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
    }
  }
}
The exception is raised when trying to instantiate the ReactiveMongo driver:
val driver = new reactivemongo.api.MongoDriver()
This suggests that MongoDriver is using Akka under the hood and binding to the same address as my main application. And indeed, if I edit my application.conf and change akka.remote.netty.tcp.port from 2552 to 2553, I get the following exception:
[ERROR] -- NettyTransport(akka://reactivemongo)
failed to bind to /127.0.0.1:2553, shutting down Netty transport
In previous versions of ReactiveMongo, instantiating the driver started a new actor system by default, so maybe version 0.11.11 tries to reuse the existing configuration?
I have tried to modify the Akka port used by the driver as follows:
val customConf = ConfigFactory.parseString("""
  akka {
    remote {
      netty.tcp.port = 4711
    }
  }
""")
val typesafeConfig: com.typesafe.config.Config = ConfigFactory.load(customConf)
val driver = new reactivemongo.api.MongoDriver(Some(typesafeConfig))
But this does not work; the new port is not taken into account and I keep getting the same error:
[ERROR] -- NettyTransport(akka://reactivemongo)
failed to bind to /127.0.0.1:2552, shutting down Netty transport
Actually, ReactiveMongo loads its configuration from the key mongo-async-driver.
So, adding the following lets you configure ReactiveMongo's underlying Akka system:
val customConf = ConfigFactory.parseString("""
  mongo-async-driver {
    akka {
      loglevel = WARNING
      remote {
        enabled-transports = ["akka.remote.netty.tcp"]
        netty.tcp {
          hostname = "127.0.0.1"
          port = 4711
        }
      }
    }
  }
""")
I have an Orion instance with Cygnus at FIWARE Lab; subscription and notification run fine, but I cannot persist data to cosmos.lab.fi-ware.org.
Cygnus returns this error:
[ERROR - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionSink.process(OrionSink.java:139)] Persistence error (The talky/talkykar/room6_room directory could not be created in HDFS. HttpFS response: 503 Service unavailable)
This is my agent_a.conf file:
cygnusagent.sources = http-source
cygnusagent.sinks = hdfs-sink
cygnusagent.channels = hdfs-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = hdfs-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = talky
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = talkykar
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts de
# Timestamp interceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# Destination extractor interceptor, do not change
cygnusagent.sources.http-source.interceptors.de.type = es.tid.fiware.fiwareconnectors.cygnus.interceptors.DestinationExtractor$Builder
# Matching table for the destination extractor interceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.de.matching_table = /usr/cygnus/conf/matching_table.conf
# ============================================
# OrionHDFSSink configuration
# channel name from where to read notification events
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
# sink class, must not be changed
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
# Comma-separated list of FQDN/IP address regarding the Cosmos Namenode endpoints
# If you are using Kerberos authentication, then the usage of FQDNs instead of IP addresses is mandatory
cygnusagent.sinks.hdfs-sink.cosmos_host = http://cosmos.lab.fi-ware.org
# port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs and free choice for inifinty
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
# default username allowed to write in HDFS
cygnusagent.sinks.hdfs-sink.cosmos_default_username = myuser
# default password for the default username
cygnusagent.sinks.hdfs-sink.cosmos_default_password = mypass
# HDFS backend type (webhdfs, httpfs or infinity)
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
# how the attributes are stored, either per row either per column (row, column)
cygnusagent.sinks.hdfs-sink.attr_persistence = row
# Hive FQDN/IP address of the Hive server
cygnusagent.sinks.hdfs-sink.hive_host = http://cosmos.lab.fi-ware.org
# Hive port for Hive external table provisioning
cygnusagent.sinks.hdfs-sink.hive_port = 10000
# Kerberos-based authentication enabling
cygnusagent.sinks.hdfs-sink.krb5_auth = false
# Kerberos username
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username
# Kerberos password
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx
# Kerberos login file
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf
# Kerberos configuration file
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf
#=============================================
And this is the Cygnus log:
2015-05-04 09:05:10,434 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink.persist(OrionHDFSSink.java:315)] [hdfs-sink] Persisting data at OrionHDFSSink. HDFS file (talky/talkykar/room6_room/room6_room.txt), Data ({"recvTimeTs":"1430723069","recvTime":"2015-05-04T09:04:29.819","entityId":"Room6","entityType":"Room","attrName":"temperature","attrType":"float","attrValue":"26.5","attrMd":[]})
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:255)] HDFS request: PUT http://http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/mped.mlg/talky/talkykar/room6_room?op=mkdirs&user.name=mped.mlg HTTP/1.1
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:186)] Connection request: [route: {}->http://http][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 500]
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:220)] Connection leased: [id: 21][route: {}->http://http][total kept alive: 0; route allocated: 1 of 100; total allocated: 1 of 500]
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.close(DefaultClientConnection.java:169)] Connection org.apache.http.impl.conn.DefaultClientConnection#5700187d closed
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.shutdown(DefaultClientConnection.java:154)] Connection org.apache.http.impl.conn.DefaultClientConnection#5700187d shut down
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.releaseConnection(PoolingClientConnectionManager.java:272)] Connection [id: 21][route: {}->http://http] can be kept alive for 9223372036854775807 MILLISECONDS
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.close(DefaultClientConnection.java:169)] Connection org.apache.http.impl.conn.DefaultClientConnection#5700187d closed
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.releaseConnection(PoolingClientConnectionManager.java:278)] Connection released: [id: 21][route: {}->http://http][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 500]
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:191)] The used HDFS endpoint is not active, trying another one (host=http://cosmos.lab.fi-ware.org)
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionSink.process(OrionSink.java:139)] Persistence error (The talky/talkykar/room6_room directory could not be created in HDFS. HttpFS response: 503 Service unavailable)
Thanks.
If you take a look at this log line:
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:255)] HDFS request: PUT http://http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/mped.mlg/talky/talkykar/room6_room?op=mkdirs&user.name=mped.mlg HTTP/1.1
you will see that you are trying to create an HDFS directory using a http://http://cosmos.lab... URL (notice the doubled http://).
This is because you have configured:
cygnusagent.sinks.hdfs-sink.cosmos_host = http://cosmos.lab.fi-ware.org
instead of:
cygnusagent.sinks.hdfs-sink.cosmos_host = cosmos.lab.fi-ware.org
This parameter asks for a host, not a URL (the same applies to hive_host).
That being said, in future releases we will allow both forms.
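To verify the fix independently of Cygnus, you can issue the same HttpFS request directly, mirroring the URL from the log above without the doubled scheme (myuser is a placeholder for your Cosmos username):

curl -X PUT "http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/myuser/talky/talkykar/room6_room?op=mkdirs&user.name=myuser"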