I have a network setup of 100 hosts, which have to enter the network one by one until all have joined, over a simulation time of 24 hours (one option is to have one host join every 864 seconds).
I am interested in counting the multicast messages exchanged between the machines through the Neighbour Discovery Protocol. Is it possible to do this without changing anything in the IPv6NeighbourDiscovery.cc source file?
This is my NED File:
package inet.examples.wireless.wiredandwirelesshostswithap;
import inet.networklayer.configurator.ipv6.FlatNetworkConfigurator6;
import inet.networklayer.icmpv6.IPv6NeighbourDiscovery;
import inet.node.ethernet.Eth100M;
import inet.node.ipv6.Router6;
import inet.node.xmipv6.WirelessHost6;
import inet.node.wireless.AccessPoint;
import inet.physicallayer.ieee80211.packetlevel.Ieee80211ScalarRadioMedium;
network WiredAndWirelessHostsWithAP
{
parameters:
int n;
#display("bgb=503,434");
submodules:
wirelessHost[n]: WirelessHost6 {
#display("p=58,88");
}
router6: Router6 {
#display("p=412,88");
}
accessPoint: AccessPoint {
#display("p=323,87");
}
configurator: FlatNetworkConfigurator6 {
#display("p=323,165");
}
radioMedium: Ieee80211ScalarRadioMedium {
#display("p=98,392");
}
connections:
accessPoint.ethg++ <--> Eth100M <--> router6.ethg++;
}
And the ini file:
[General]
network = WiredAndWirelessHostsWithAP
sim-time-limit = 24h
tkenv-plugin-path = ../../../etc/plugins
# number of client computers
*.n = 100
**.*Host*.numUdpApps = 3
**.*Host*.udpApp[0].typename = "UDPEchoApp"
**.*Host*.udpApp[0].localPort = 1000
**.*Host*.udpApp[*].typename = "UDPBasicApp"
**.*Host*.udpApp[1..].destPort = 1000
**.*Host*.udpApp[1..].messageLength = 100B
**.*Host*.udpApp[1..].sendInterval = 1s
**.*Host*.udpApp[1..].stopTime = 300s
Thank you in advance!
No, it is not possible to count the number of received IPv6NeighbourDiscovery messages without modifying the C++ files.
It is clear to me that it is of course possible using Actors: for instance, https://github.com/chbatey/akka-http-typed.git uses Akka HTTP and typed actors.
But it is unclear to me whether, using just Akka Streams and its Alpakka connectors library (which includes database connectors), it is possible to build regular CRUD / OLTP services, or only data replication from one database to another and other OLAP / batch / stream processing scenarios.
If you know how it can be done, please give a few details, and if you can provide an example, on GitHub for instance, that would be great.
The way I am thinking it may be possible is that the server is involved in two conversations / stateful stream transformations: one with the outside world over HTTP, and one with the database. I am not sure whether it can be modelled like that.
https://doc.akka.io/docs/alpakka/current/slick.html seems to offer both UPDATE/INSERT as a Sink and a pointed SELECT for a certain id as a Source. Do you know if there is an example app, or can you broadly describe how the wiring with Akka HTTP would happen?
I have put a demo here; I hope it helps you.
Create the table (the database is MySQL):
CREATE TABLE test(id VARCHAR(32))
sbt:
"com.lightbend.akka" %% "akka-stream-alpakka-slick" % "1.1.0",
"mysql" % "mysql-connector-java" % "5.1.40"
Code:
package tech.parasol.scala.crud
import java.sql.SQLException
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives.{complete, get, path, _}
import akka.stream.alpakka.slick.scaladsl.{Slick, SlickSession}
import akka.stream.scaladsl.Sink
import akka.stream.{ActorAttributes, ActorMaterializer, Supervision}
import com.typesafe.config.ConfigFactory
import scala.concurrent.Future
import scala.io.StdIn
import scala.util.{Failure, Success}
object CrudTest1 {
def main(args: Array[String]): Unit = {
implicit val system = ActorSystem("CrudTest1")
implicit val materializer = ActorMaterializer()
implicit val executionContext = system.dispatcher
val hostName = "120.0.0.1"
val rocketDbConfig =
s"""
|db-config {
| profile = "slick.jdbc.MySQLProfile$$"
| db {
| dataSourceClass = "slick.jdbc.DriverDataSource"
| properties = {
| driver = "com.mysql.jdbc.Driver"
| url = "jdbc:mysql://${hostName}:3306/rocket?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useSSL=false"
| user = "root"
| password = "passw0rd"
| }
| }
|}
|
""".stripMargin
implicit val session = SlickSession.forConfig("db-config", ConfigFactory.parseString(rocketDbConfig))
import session.profile.api._
def persistence(message: String) = {
def insert(message: String): DBIO[Int] = {
sqlu"""INSERT INTO test(id) VALUES (${message})"""
}
session.db.run(insert(message)).map {
case _ => message
}.recover {
case e : SQLException => {
throw new Exception("Database error ===>")}
case e : Exception => {
throw new Exception("Database error.")}
}
}
val route = path("hello" / Segment ) { name =>
get {
val res = persistence(name)
onComplete(res) {
case Success(value) => {
complete(s"<h1>Say hello to ${name}</h1>")
}
case Failure(e) => {
complete(s"<h1>Failed to say hello to ${name}</h1>")
}
}
}
}
val bindingFuture = Http().bindAndHandle(route, "localhost", 8088)
println(s"Server online at http://localhost:8088/\nPress RETURN to stop...")
StdIn.readLine() // let it run until user presses return
bindingFuture
.flatMap(_.unbind()) // trigger unbinding from the port
.onComplete(_ => system.terminate()) // and shutdown when done
}
}
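With the demo running, every GET to the hello route inserts its path segment into the test table, so it can be exercised with, for example: curl http://localhost:8088/hello/world (where world is just an arbitrary test value).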
Yes. Basically, at every request received in Akka HTTP, we create an Akka Streams graph (typically just a pipeline): essentially the Alpakka Slick Source reading from the database, possibly prefixed by a few operators, and the result is handed back to Akka HTTP, which of course supports completing a response with a Source. More details at https://www.quora.com/Is-it-possible-to-build-an-OLTP-CRUD-HTTP-server-using-Akka-HTTP-Akka-Streams-Alpakka-and-a-database-Do-you-know-any-examples-of-code-on-GitHub-or-elsewhere/answer/Nicolae-Marasoiu
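To illustrate that read-side wiring concretely, here is a minimal sketch (not from the demo above; the "items" path, the readRoute name, and the plain-text response are made up for illustration). It assumes the implicit SlickSession session and the import session.profile.api._ from the demo are in scope, and it streams a pointed SELECT on the test table straight into the HTTP response:
import akka.NotUsed
import akka.http.scaladsl.model.{ContentTypes, HttpEntity}
import akka.stream.scaladsl.Source
import akka.util.ByteString

val readRoute = path("items" / Segment) { id =>
  get {
    // Pointed SELECT exposed as a Source[String, NotUsed] via Alpakka Slick
    val rows: Source[String, NotUsed] =
      Slick.source(sql"SELECT id FROM test WHERE id = ${id}".as[String])
    // Akka HTTP completes the request directly from the streaming entity
    complete(HttpEntity(ContentTypes.`text/plain(UTF-8)`, rows.map(r => ByteString(r + "\n"))))
  }
}
Such a readRoute could simply be concatenated with the demo's write-side route (route ~ readRoute) before calling bindAndHandle.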
" I'm new in neural networks and DL4j, and I want to train neural network with CSV and build linear regression. How can I fix these errors "Cannot resolve method'.iterations and getFeatureMatrix()'"?
"Previously I'm tried to do that, but have another error in 'seed'".
import org.datavec.api.records.reader.RecordReader;
import org.datavec.api.records.reader.impl.csv.CSVRecordReader;
import org.datavec.api.split.FileSplit;
import org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator;
import org.deeplearning4j.nn.api.OptimizationAlgorithm;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.Updater;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.weights.WeightInit;
import org.deeplearning4j.optimize.listeners.ScoreIterationListener;
import org.nd4j.evaluation.classification.Evaluation;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.api.DataSet;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.lossfunctions.LossFunctions;
import java.io.File;
public class Data {
public static void main(String[] args) throws Exception {
// Parameters
int seed = 3000;
int batchSize = 200;
double learningRate = 0.001;
int nEpochs = 150;
int numInputs = 2;
int numOutputs = 2;
int numHiddenNodes = 100;
// Load data
//load data train
RecordReader rr = new CSVRecordReader();
rr.initialize(new FileSplit(new File("train.csv")));
DataSetIterator trainIter = new RecordReaderDataSetIterator(rr, batchSize, 0, 2);
//load test data
RecordReader rrTest = new CSVRecordReader();
rr.initialize(new FileSplit(new File("test.csv")));
DataSetIterator testIter = new RecordReaderDataSetIterator(rrTest, batchSize, 0, 2);
// Network configuration
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
.seed(seed)
.iterations(1000)
.optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
.learningRate(learningRate)
.updater(Updater.NESTEROVS).momentum(0.9)
.list()
.layer(0, new DenseLayer.Builder()
.nIn(numInputs)
.nOut(numHiddenNodes)
.weightInit(WeightInit.XAVIER)
.activation(Activation.fromString("relu"))
.build())
.layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
.weightInit(WeightInit.XAVIER)
.activation(Activation.fromString("softmax"))
.weightInit(WeightInit.XAVIER)
.nIn(numHiddenNodes)
.nOut(numOutputs)
.build()
)
.pretrain(false).backprop(true).build();
MultiLayerNetwork model = new MultiLayerNetwork(conf);
model.init();
model.setListeners(new ScoreIterationListener((15)));
for (int n = 0; n < nEpochs; n++) {
model.fit((trainIter));
System.out.println(("--------------eval model"));
Evaluation eval = new Evaluation(numOutputs);
while (testIter.hasNext()) {
DataSet t = testIter.next();
INDArray features = getFeatureMatrix();
INDArray lables = t.getLabels();
INDArray predicted = model.output(features, false);
eval.eval(lables, predicted);
}
System.out.println(eval.stats());
}
}
}
First, you should consider using more classes (for example, one for the definition of the neural network, one for the training process, and so on). That is just a best-practice remark.
I do not know which version of DL4J you're using, but getFeatureMatrix() has been removed (getFeatures() is its replacement in recent releases). One more thing: this method should be called on a DataSet object and not "statically" as you seem to do (i.e. t.getFeatures()).
It is much the same with the iterations() function of the network configuration builder; this function has been removed in recent DL4J releases. You can get more information about it on this thread. You now have to control the number of iterations another way, typically through the number of epochs / fit calls; you can take a look at this thread. Hope this answers your question!
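To make that concrete, here is a hedged sketch of how those two spots could look on a recent DL4J release; the Nesterovs updater class and the exact builder shape are assumptions based on current DL4J versions, not something taken from the question:
// requires: import org.nd4j.linalg.learning.config.Nesterovs;
// Configuration without the removed iterations(...) call; learning rate and
// momentum are now passed to the updater object instead of separate builder methods.
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(seed)
        .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
        .updater(new Nesterovs(learningRate, 0.9))
        .list()
        .layer(0, new DenseLayer.Builder().nIn(numInputs).nOut(numHiddenNodes)
                .weightInit(WeightInit.XAVIER).activation(Activation.RELU).build())
        .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                .weightInit(WeightInit.XAVIER).activation(Activation.SOFTMAX)
                .nIn(numHiddenNodes).nOut(numOutputs).build())
        .build();

// Evaluation loop: getFeatureMatrix() is gone; call getFeatures() on the DataSet instance.
while (testIter.hasNext()) {
    DataSet t = testIter.next();
    INDArray features = t.getFeatures();   // was: getFeatureMatrix()
    INDArray labels = t.getLabels();
    INDArray predicted = model.output(features, false);
    eval.eval(labels, predicted);
}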
I am using OpenDaylight and trying to replace the default distributed database with Apache Ignite.
I am using the jar built from the source code here.
https://github.com/Romeh/akka-persistance-ignite
However, the class IgniteWriteJournal does not seem to load, which I have checked by putting some print statements in its constructor.
Is there any issue with the .conf file?
The following is a portion of the akka.conf file I am using in OpenDaylight.
odl-cluster-data {
akka {
remote {
artery {
enabled = off
canonical.hostname = "10.145.59.38"
canonical.port = 2550
}
netty.tcp {
hostname = "10.145.59.38"
port = 2550
}
# when under load we might trip a false positive on the failure detector
# transport-failure-detector {
# heartbeat-interval = 4 s
# acceptable-heartbeat-pause = 16s
# }
}
cluster {
# Remove ".tcp" when using artery.
seed-nodes = ["akka.tcp://opendaylight-cluster-data#10.145.59.38:2550"]
roles = ["member-1"]
}
extensions = ["akka.persistence.ignite.extension.IgniteExtensionProvider"]
akka.persistence.journal.plugin = "akka.persistence.journal.ignite"
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot.ignite"
persistence {
# Ignite journal plugin
journal {
ignite {
# Class name of the plugin
class = "akka.persistence.ignite.journal.IgniteWriteJournal"
cache-prefix = "akka-journal"
// Should be based on the data grid topology
cache-backups = 1
// if ignite is already started in a separate standalone grid where journal cache is already created
cachesAlreadyCreated = false
}
}
# Ignite snapshot plugin
snapshot {
ignite {
# Class name of the plugin
class = "akka.persistence.ignite.snapshot.IgniteSnapshotStore"
cache-prefix = "akka-snapshot"
// Should be based on the data grid topology
cache-backups = 1
// if ignite is already started in a separate standalone grid where snapshot cache is already created
cachesAlreadyCreated = false
}
}
}
}
ignite {
//to start client or server node to connect to Ignite data cluster
isClientNode = false
// for ONLY testing we use localhost
// used for grid cluster connectivity
tcpDiscoveryAddresses = "localhost"
metricsLogFrequency = 0
// Thread pools used by Ignite; should be sized based on the target machine specs
queryThreadPoolSize = 4
dataStreamerThreadPoolSize = 1
managementThreadPoolSize = 2
publicThreadPoolSize = 4
systemThreadPoolSize = 2
rebalanceThreadPoolSize = 1
asyncCallbackPoolSize = 4
peerClassLoadingEnabled = false
// to enable or disable durable memory persistence
enableFilePersistence = true
// used for grid cluster connectivity, change it to suit your configuration
igniteConnectorPort = 11211
// used for grid cluster connectivity , change it to suit your configuration
igniteServerPortRange = "47500..47509"
// durable memory persistence storage file system path; change it to suit your configuration
ignitePersistenceFilePath = "./data"
}
}
I assume you modified the configuration/initial/akka.conf. First, those sections need to be inside the odl-cluster-data section (I can't tell from just your snippet). Also, it looks like the following should be:
akka.persistence.journal.plugin = "akka.persistence.journal.ignite"
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot.ignite"
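One detail worth double-checking (my note, not part of the original answer): HOCON resolves keys relative to the enclosing block, so writing akka.persistence.journal.plugin inside the akka { } section actually defines akka.akka.persistence.journal.plugin. Inside that section the two settings would be written without the leading akka. prefix, for example:
akka {
  persistence.journal.plugin = "akka.persistence.journal.ignite"
  persistence.snapshot-store.plugin = "akka.persistence.snapshot.ignite"
}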
I am using OpenDaylight and trying to replace the default distributed database with Apache Ignite.
I am using the jar built from the source code here:
https://github.com/Romeh/akka-persistance-ignite and have deployed it in the OpenDaylight Karaf container.
The following is a portion of the akka.conf file I am using in OpenDaylight to replace the LevelDB journal with Apache Ignite.
odl-cluster-data {
akka {
loglevel = DEBUG
actor {
provider = "akka.cluster.ClusterActorRefProvider"
default-dispatcher {
# Configuration for the fork join pool
fork-join-executor {
# Min number of threads to cap factor-based parallelism number to
parallelism-min = 2
# Parallelism (threads) ... ceil(available processors * factor)
parallelism-factor = 2.0
# Max number of threads to cap factor-based parallelism number to
parallelism-max = 10
}
# Throughput defines the maximum number of messages to be
# processed per actor before the thread jumps to the next actor.
# Set to 1 for as fair as possible.
throughput = 10
}
}
remote {
log-remote-lifecycle-events = off
netty.tcp {
hostname = "10.145.59.44"
port = 2551
}
}
cluster {
seed-nodes = [
"akka.tcp://test#127.0.0.1:2551"
]
min-nr-of-members = 1
auto-down-unreachable-after = 30s
}
# Disable legacy metrics in akka-cluster.
akka.cluster.metrics.enabled=off
akka.persistence.journal.plugin = "akka.persistence.journal.ignite"
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot.ignite"
extensions = ["akka.persistence.ignite.extension.IgniteExtensionProvider"]
persistence {
# Ignite journal plugin
journal {
ignite {
# Class name of the plugin
class = "akka.persistence.ignite.journal.IgniteWriteJournal"
plugin-dispatcher = "ignite-dispatcher"
cache-prefix = "akka-journal"
// Should be based on the data grid topology
cache-backups = 1
// if ignite is already started in a separate standalone grid where journal cache is already created
cachesAlreadyCreated = false
}
}
# Ignite snapshot plugin
snapshot {
ignite {
# Class name of the plugin
class = "akka.persistence.ignite.snapshot.IgniteSnapshotStore"
plugin-dispatcher = "ignite-dispatcher"
cache-prefix = "akka-snapshot"
// Should be based on the data grid topology
cache-backups = 1
// if ignite is already started in a separate standalone grid where snapshot cache is already created
cachesAlreadyCreated = false
}
}
}
}
}
However, the class IgniteWriteJournal does not seem to load, which I have checked by putting some print statements in its constructor as follows.
public IgniteWriteJournal(Config config) throws NotSerializableException {
System.out.println("!##$% inside IgniteWriteJournal constructor\n");
ActorSystem actorSystem = context().system();
serializer = SerializationExtension.get(actorSystem).serializerFor(PersistentRepr.class);
storage = new Store<>(actorSystem);
JournalCaches journalCaches = journalCacheProvider.apply(config, actorSystem);
sequenceNumberTrack = journalCaches.getSequenceCache();
cache = journalCaches.getJournalCache();
}
So what exactly happens to the class that is mentioned in the akka.persistence.journal.ignite tag? Does the constructor of that class get called? What exactly happens in the background when the akka.conf file is read?
Where are you looking for the printouts - in data/log/karaf.log? System.out.println doesn't go there - use an org.slf4j.Logger instead.
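A minimal sketch of that slf4j-based replacement (slf4j is already available in Karaf via pax-logging, so no extra dependency should be needed):
// Field in IgniteWriteJournal:
private static final org.slf4j.Logger LOG =
        org.slf4j.LoggerFactory.getLogger(IgniteWriteJournal.class);

// Inside the constructor, instead of System.out.println(...):
LOG.info("inside IgniteWriteJournal constructor");
This output then shows up in data/log/karaf.log.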
How did you rebuild the IgniteWriteJournal source and deploy the new artifact? Are you sure your changes were actually deployed?
Using Akka.NET, I am trying to implement a simple cluster use case.
Cluster - for node up/down events.
Remote - for sending messages to a specific node.
There are two actors: a Master Node, which listens for cluster events, and a Slave Node, which connects to the cluster.
Address address = new Address("akka.tcp", "ClusterSystem", "master", 8080);
cluster.Join(address);
When the ClusterEvent.MemberUp message is received, the Master Node creates an ActorSelection:
ClusterEvent.MemberUp up = message as ClusterEvent.MemberUp;
ActorSelection nodeActor = system.ActorSelection(up.Member.Address + "/user/slave_0");
Sending a message to this actor causes an error:
Association with remote system akka.tcp://ClusterSystem@slave:8090 has failed; address is now gated for 5000 ms. Reason is: [Disassociated]
master config:
akka {
actor {
provider = ""Akka.Cluster.ClusterActorRefProvider, Akka.Cluster""
}
remote {
helios.tcp {
port = 8080
hostname = master
bind-hostname = master
bind-port = 8080
send-buffer-size = 512000b
receive-buffer-size = 512000b
maximum-frame-size = 1024000b
tcp-keepalive = on
}
}
cluster{
failure-detector {
heartbeat-interval = 10s
}
auto-down-unreachable-after = 10s
gossip-interval = 5s
}
stdout-loglevel = DEBUG
loglevel = DEBUG
debug {
receive = on
autoreceive = on
lifecycle = on
event-stream = on
unhandled = on
}
}
slave config:
akka {
actor {
provider = ""Akka.Cluster.ClusterActorRefProvider, Akka.Cluster""
}
remote {
helios.tcp {
port = 8090
hostname = slave
bind-hostname = slave
bind-port = 8090
send-buffer-size = 512000b
receive-buffer-size = 512000b
maximum-frame-size = 1024000b
tcp-keepalive = on
}
}
cluster{
failure-detector {
heartbeat-interval = 10s
}
auto-down-unreachable-after = 10s
gossip-interval = 5s
}
stdout-loglevel = DEBUG
loglevel = DEBUG
debug {
receive = on
autoreceive = on
lifecycle = on
event-stream = on
unhandled = on
}
}
Here's your problem:
cluster{
failure-detector {
heartbeat-interval = 10s
}
auto-down-unreachable-after = 10s
gossip-interval = 5s
}
heartbeat-interval and auto-down-unreachable-after are the same duration - therefore your nodes will almost always disassociate automatically after 10s, because you're betting on a race condition that the failure detector might lose.
auto-down-unreachable-after is a dangerous setting - do not use it. You'll end up with a split brain or worse.
And make sure your failure detector interval is always lower than your auto-down interval.
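As a sketch of what that could look like (the 1s heartbeat is just an illustrative value close to the default, and downing is left to be handled manually or by a proper split-brain strategy rather than auto-down):
cluster {
  failure-detector {
    heartbeat-interval = 1s   # well below any downing timeout
  }
  # auto-down-unreachable-after removed; down unreachable nodes manually
  # or via a split-brain resolver strategy instead
  gossip-interval = 5s
}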