I was doing mongodump --host=replicasetname/host,host,host -d databasename --out=path and it has always worked fine.
But I have created a new replica set, and when I do the same, I get nothing.
This happens when it works:
2016-02-10T11:58:29.152+0100 starting new replica set monitor for replica set ... with seeds ...:27017,...:27017,...:27017
2016-02-10T11:58:29.152+0100 creating new connection to:...:27017
2016-02-10T11:58:29.152+0100 [ReplicaSetMonitorWatcher] BackgroundJob starting: ReplicaSetMonitorWatcher
2016-02-10T11:58:29.152+0100 [ReplicaSetMonitorWatcher] starting
2016-02-10T11:58:29.213+0100 [ConnectBG] BackgroundJob starting: ConnectBG
2016-02-10T11:58:29.275+0100 connected to server ...:27017 (...)
2016-02-10T11:58:29.275+0100 connected connection!
2016-02-10T11:58:29.339+0100 changing hosts to ... from ...
connected to: ...
2016-02-10T11:58:29.339+0100 creating new connection to:...
2016-02-10T11:58:29.400+0100 [ConnectBG] BackgroundJob starting: ConnectBG
2016-02-10T11:58:29.460+0100 connected to server ...
2016-02-10T11:58:29.460+0100 connected connection!
2016-02-10T11:58:29.521+0100 DATABASE: ... to ...
2016-02-10T11:58:29.648+0100 ....system.indexes to /tmp/dump/...
...
...
This happens when it does not work:
2016-02-10T12:03:02.181+0100 starting new replica set monitor for replica set ... with seeds ...
2016-02-10T12:03:02.181+0100 creating new connection to:...
2016-02-10T12:03:02.181+0100 [ReplicaSetMonitorWatcher] BackgroundJob starting: ReplicaSetMonitorWatcher
2016-02-10T12:03:02.181+0100 [ReplicaSetMonitorWatcher] starting
2016-02-10T12:03:02.192+0100 [ConnectBG] BackgroundJob starting: ConnectBG
2016-02-10T12:03:02.258+0100 connected to server ...
2016-02-10T12:03:02.258+0100 connected connection!
connected to: ...
2016-02-10T12:03:02.323+0100 creating new connection to:...
2016-02-10T12:03:02.323+0100 [ConnectBG] BackgroundJob starting: ConnectBG
2016-02-10T12:03:02.386+0100 connected to server ...
2016-02-10T12:03:02.386+0100 connected connection!
2016-02-10T12:03:02.450+0100 DATABASE: ... to ...
The output folder is created, but it is empty.
No errors are shown.
The database exists, and it has collections and documents.
The configuration of the two replica sets is different, though.
This one works:
{
"settings" : {
"chainingAllowed" : true,
"heartbeatTimeoutSecs" : 10,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
}
}
}
This other one does not:
{
"protocolVersion" : NumberLong(1),
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
}
}
}
What is happening here?
What can I do?
Thanks
I am working on a Quarkus application to act as an Operator in an OpenShift/Kubernetes cluster. When writing the tests using a kubernetesMockServer, it works fine for REST calls to the developed application, but when the code runs inside an initialization block it fails; in the log I see that the mock server is replying with a 404 error:
2020-02-17 11:04:12,148 INFO [okh.moc.MockWebServer] (MockWebServer /127.0.0.1:53048) MockWebServer[57577] received request: GET /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions HTTP/1.1 and responded: HTTP/1.1 404 Client Error
In the test code I have:
@QuarkusTestResource(KubernetesMockServerTestResource.class)
@QuarkusTest
class TestAIRController {
@MockServer
KubernetesMockServer mockServer;
private CustomResourceDefinition crd;
private CustomResourceDefinitionList crdlist;
@BeforeEach
public void before() {
crd = new CustomResourceDefinitionBuilder()
.withApiVersion("apiextensions.k8s.io/v1beta1")
.withNewMetadata().withName("types.openshift.example-cloud.com")
.endMetadata()
.withNewSpec()
.withNewNames()
.withKind("Type")
.withPlural("types")
.endNames()
.withGroup("openshift.example-cloud.com")
.withVersion("v1")
.withScope("Namespaced")
.endSpec()
.build();
crdlist = new CustomResourceDefinitionListBuilder().withItems(crd).build();
mockServer.expect().get().withPath("/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions")
.andReturn(200, crdlist)
.always();
}
@Test
void test() {
RestAssured.when().get("/dummy").then().body("size()", Is.is(0));
}
}
The dummy REST endpoint uses the same code for searching the CRD, and in fact when running without the class observing the startup event, it works fine:
@Path("/dummy")
public class Dummy {
private static final Logger LOGGER =LoggerFactory.getLogger(Dummy.class);
@GET
@Produces(MediaType.APPLICATION_JSON)
public Response listCRDs(){
KubernetesClient oc = new DefaultKubernetesClient();
CustomResourceDefinition crd = oc.customResourceDefinitions()
.list().getItems().stream()
.filter( ob -> ob.getMetadata().getName().equals("types.openshift.example-cloud.com"))
.findFirst().get();
LOGGER.info("CRD NAME is {}", crd.getMetadata().getName());
return Response.ok(new ArrayList<String>()).build();
}
}
Finally, this is an excerpt of the class that observes the startup event:
@ApplicationScoped
public class AIRWatcher {
private static final Logger LOGGER = LoggerFactory.getLogger(AIRWatcher.class);
void OnStart(@Observes StartupEvent ev) {
KubernetesClient oc = new DefaultKubernetesClient();
CustomResourceDefinition crd = oc.customResourceDefinitions()
.list().getItems().stream()
.filter( ob -> ob.getMetadata().getName().equals("types.openshift.example-cloud.com"))
.findFirst().get();
LOGGER.info("Using {}", crd.getMetadata().getName());
}
}
It's as if the mock server is not yet initialized when the startup event fires. Is there any way to solve this?
The problem is that the Mock Server is only configured to respond right before the test execution, while this code:
void OnStart(@Observes StartupEvent ev) {
KubernetesClient oc = new DefaultKubernetesClient();
CustomResourceDefinition crd = oc.customResourceDefinitions()
.list().getItems().stream()
.filter( ob -> ob.getMetadata().getName().equals("types.openshift.example-cloud.com"))
.findFirst().get();
LOGGER.info("Using {}", crd.getMetadata().getName());
}
runs when the application actually comes up (which is before any @BeforeEach method runs).
Could you please open an issue on the Quarkus GitHub? This should be something we provide a solution for.
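Until that lands, one possible workaround (a minimal sketch of my own, not something confirmed by the Quarkus team) is to defer the CRD lookup out of the StartupEvent observer so it only runs once a request actually needs it, by which point the expectations set up in @BeforeEach are in place. It only uses the fabric8 client calls already shown above:
@ApplicationScoped
public class AIRWatcher {
    private static final Logger LOGGER = LoggerFactory.getLogger(AIRWatcher.class);
    // Resolved lazily instead of eagerly in OnStart, so the lookup happens
    // after the mock server has been configured by the test's @BeforeEach.
    private volatile CustomResourceDefinition crd;
    void OnStart(@Observes StartupEvent ev) {
        LOGGER.info("Application started, CRD will be resolved on first use");
    }
    CustomResourceDefinition getCrd() {
        if (crd == null) {
            KubernetesClient oc = new DefaultKubernetesClient();
            crd = oc.customResourceDefinitions()
                .list().getItems().stream()
                .filter(ob -> ob.getMetadata().getName().equals("types.openshift.example-cloud.com"))
                .findFirst().get();
            LOGGER.info("Using {}", crd.getMetadata().getName());
        }
        return crd;
    }
}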
I am using OpenDaylight and trying to replace the default distributed database with Apache Ignite.
I am using the jar built from the source code here:
https://github.com/Romeh/akka-persistance-ignite
However, the class IgniteWriteJournal does not seem to load, which I have checked by putting some print statements in its constructor.
Is there any issue with the .conf file?
The following is a portion of the akka.conf file i am using in OpenDaylight.
odl-cluster-data {
akka {
remote {
artery {
enabled = off
canonical.hostname = "10.145.59.38"
canonical.port = 2550
}
netty.tcp {
hostname = "10.145.59.38"
port = 2550
}
# when under load we might trip a false positive on the failure detector
# transport-failure-detector {
# heartbeat-interval = 4 s
# acceptable-heartbeat-pause = 16s
# }
}
cluster {
# Remove ".tcp" when using artery.
seed-nodes = ["akka.tcp://opendaylight-cluster-data@10.145.59.38:2550"]
roles = ["member-1"]
}
extensions = ["akka.persistence.ignite.extension.IgniteExtensionProvider"]
akka.persistence.journal.plugin = "akka.persistence.journal.ignite"
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot.ignite"
persistence {
# Ignite journal plugin
journal {
ignite {
# Class name of the plugin
class = "akka.persistence.ignite.journal.IgniteWriteJournal"
cache-prefix = "akka-journal"
// Should be based on the data grid topology
cache-backups = 1
// if ignite is already started in a separate standalone grid where journal cache is already created
cachesAlreadyCreated = false
}
}
# Ignite snapshot plugin
snapshot {
ignite {
# Class name of the plugin
class = "akka.persistence.ignite.snapshot.IgniteSnapshotStore"
cache-prefix = "akka-snapshot"
// Should be based on the data grid topology
cache-backups = 1
// if ignite is already started in a separate standalone grid where snapshot cache is already created
cachesAlreadyCreated = false
}
}
}
}
ignite {
//to start client or server node to connect to Ignite data cluster
isClientNode = false
// for ONLY testing we use localhost
// used for grid cluster connectivity
tcpDiscoveryAddresses = "localhost"
metricsLogFrequency = 0
// thread pools used by Ignite, should be based on the target machine specs
queryThreadPoolSize = 4
dataStreamerThreadPoolSize = 1
managementThreadPoolSize = 2
publicThreadPoolSize = 4
systemThreadPoolSize = 2
rebalanceThreadPoolSize = 1
asyncCallbackPoolSize = 4
peerClassLoadingEnabled = false
// to enable or disable durable memory persistence
enableFilePersistence = true
// used for grid cluster connectivity, change it to suit your configuration
igniteConnectorPort = 11211
// used for grid cluster connectivity , change it to suit your configuration
igniteServerPortRange = "47500..47509"
//durable memory persistence storage file system path, change it to suit your configuration
ignitePersistenceFilePath = "./data"
}
}
I assume you modified the configuration/initial/akka.conf. First, those sections need to be inside the odl-cluster-data section (I can't tell from just your snippet). Also, since these keys already sit inside the akka { } block, the leading "akka." prefix makes them resolve to akka.akka.persistence..., so it looks like the following should be:
persistence.journal.plugin = "akka.persistence.journal.ignite"
persistence.snapshot-store.plugin = "akka.persistence.snapshot.ignite"
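For illustration, a minimal sketch of the HOCON nesting involved (only the journal key is shown, everything else omitted):
odl-cluster-data {
  akka {
    # resolves to akka.akka.persistence.journal.plugin -- note the doubled prefix
    akka.persistence.journal.plugin = "akka.persistence.journal.ignite"
    # resolves to akka.persistence.journal.plugin -- what the plugin lookup expects
    persistence.journal.plugin = "akka.persistence.journal.ignite"
  }
}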
I'm using the following tool to run an embedded Redis for unit testing.
At the beginning, my RegistrationService creates a new RedisTemplate instance:
@Import({RedisConfiguration.class})
@Service
public class RegistrationService {
RedisTemplate redisTemplate = new RedisTemplate(); //<- new instance
public String SubmitApplicationOverview(String OverviewRequest) throws IOException {
. . .
HashMap<String,Object> applicationData = mapper.readValue(OverviewRequest,new TypeReference<Map<String,Object>>(){});
redisTemplate.setHashKeySerializer(new StringRedisSerializer());
redisTemplate.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
UUID Key = UUID.randomUUID();
redisTemplate.opsForHash().putAll(Key.toString(), (applicationData)); //<-- ERRORS HERE
System.out.println("Application saved:" + OverviewRequest);
return Key.toString();
}
}
And I'm starting a mock Redis server in my test below:
...
RedisServer redisServer;
@Autowired
RegistrationService RegistrationService;
@Before
public void Setup() {
redisServer = RedisServer.newRedisServer();
redisServer.start();
}
@Test
public void testSubmitApplicationOverview() {
String body = "{\n" +
" \"VehicleCategory\": \"CAR\",\n" +
" \"EmailAddress\": \"email#email.com\"\n" +
"}";
String result = RegistrationService.SubmitApplicationOverview(body);
Assert.assertEquals("something", result);
}
Redis settings in application.properties
#Redis Settings
spring.redis.cluster.nodes=slave0:6379,slave1:6379
spring.redis.url= redis://jx-staging-redis-ha-master-svc.jx-staging:6379
spring.redis.sentinel.master=mymaster
spring.redis.sentinel.nodes=10.40.2.126:26379,10.40.1.65:26379
spring.redis.database=2
However, I'm getting a java.lang.NullPointerException error on the following line in my service under test (registrationService).
redisTemplate.opsForHash().putAll(Key.toString(), (applicationData));
According to the redis-mock documentation, creating an instance like this:
RedisServer.newRedisServer(); // bind to a random port
will bind the instance to a random port. It looks like your code expects a specific port. I believe you need to specify a port when you create the server, by passing a port number like this:
RedisServer.newRedisServer(8000); // bind to a specific port
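For example, here is a minimal sketch (my own assumptions: port 8000, localhost, and Spring Data Redis' LettuceConnectionFactory; none of these come from the original post) of starting the embedded server on a known port and building a RedisTemplate that can actually reach it:
RedisServer redisServer;
RedisTemplate<String, Object> redisTemplate;
@Before
public void setup() throws IOException {
    // Start the embedded server on a fixed, known port instead of a random one.
    redisServer = RedisServer.newRedisServer(8000);
    redisServer.start();
    // A RedisTemplate created with plain "new RedisTemplate()" has no connection
    // factory, which is a likely cause of the NullPointerException; wire one up
    // that points at the embedded server.
    LettuceConnectionFactory factory = new LettuceConnectionFactory("localhost", 8000);
    factory.afterPropertiesSet();
    redisTemplate = new RedisTemplate<>();
    redisTemplate.setConnectionFactory(factory);
    redisTemplate.setKeySerializer(new StringRedisSerializer());
    redisTemplate.setHashKeySerializer(new StringRedisSerializer());
    redisTemplate.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
    redisTemplate.afterPropertiesSet();
}
The service under test would then need to use a template built this way (for example, injected as a bean) rather than instantiating a bare RedisTemplate itself.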
I am trying to access Kafka deployed on an AWS server with a public IP. However, when trying to connect and send some data, I receive no response and the server connection is closed. Following is my producer code:
public SensorDevice() {
Properties props = new Properties();
props.put("metadata.broker.list", "myip-xyz:9092");
props.put("bootstrap.servers", "myip-xyz:9092");
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("key.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
// props.put("partitioner.class", "example.producer.SimplePartitioner");
props.put("request.required.acks", "1");
producer = new KafkaProducer<String, String>(props);
}
public void run() {
Object objectData = new Object();
ProducerRecord<String, String> data = new ProducerRecord<String, String>(
topic, "mytopic", objectData.toString());
System.out.println(data);
Future<RecordMetadata> rs = producer.send(data,
new org.apache.kafka.clients.producer.Callback() {
@Override
public void onCompletion(RecordMetadata recordMetadata,
Exception arg1) {
System.out.println("Received ack for partition="
+ recordMetadata.partition() + " offset = "
+ recordMetadata.offset());
}
});
try {
String msg = "";
RecordMetadata rm = rs.get();
msg = msg + " partition = " + rm.partition() + " offset ="
+ rm.offset();
System.out.println(msg);
} catch (Exception e) {
System.out.println(e);
}
producer.close();
}
I have also tried adding advertised.host.name to the server.properties config file.
Kafka shows the following output:
> [2015-04-24 09:06:35,329] INFO Created log for partition [mytopic,0] in /tmp/kafka-logs with properties {segment.index.bytes ->
> 10485760, file.delete.delay.ms -> 60000, segment.bytes -> 1073741824,
> flush.ms -> 9223372036854775807, delete.retention.ms -> 86400000,
> index.interval.bytes -> 4096, retention.bytes -> -1,
> min.insync.replicas -> 1, cleanup.policy -> delete,
> unclean.leader.election.enable -> true, segment.ms -> 604800000,
> max.message.bytes -> 1000012, flush.messages -> 9223372036854775807,
> min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000,
> segment.jitter.ms -> 0}. (kafka.log.LogManager)
> [2015-04-24 09:06:35,330] WARN Partition [mytopic,0] on broker 0: No checkpointed highwatermark is found for partition [mytopic,0]
> (kafka.cluster.Partition)
> [2015-04-24 09:07:34,788] INFO Closing socket connection to /50.156.87.157. (kafka.network.Processor)
Please help me resolve this issue!
EC2 IP addresses are internal, so you may face issues when dealing with an EC2 server running Kafka and ZooKeeper. Try setting the advertised.host.name and advertised.port properties in your server.properties file.
advertised.host.name should be the IP address of the EC2 server.
advertised.port should be the Kafka port; by default it is 9092.
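For illustration, a minimal sketch of the relevant server.properties entries (the placeholder address below is an assumption; substitute the address your producer can actually reach):
# Host name/IP the broker advertises to clients; must be reachable from where the producer runs
advertised.host.name=<ec2-public-ip>
# Port the broker advertises to clients (Kafka's default is 9092)
advertised.port=9092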
I made a sample application and ran it from 2 different machines, where both applications use the AppFabric cache. I set pollInterval="120" seconds in both applications' config files with the settings below:
<localCache isEnabled="true"
sync="NotificationBased"
ttlValue="300"
objectCount="10"/>
<!--(optional) specify cache notifications poll interval-->
<clientNotification pollInterval="120" />
I also enabled notifications on the cluster using PowerShell.
Now, from Machine1 I read the key called key1, whose value is "Value1".
Then from Machine2 I changed the value of key1 to "Changed".
Then from Machine2 I read key1, whose value is now displayed as "Changed".
Then, after the poll interval period of 2 minutes, I read key1 from Machine1, and its value is still displayed as "Value1". Why is it not displaying "Changed"?
Why is the change not detected by the application on Machine1? Why is the local cache invalidation not occurring?
@Ahmed Ilyas:
"Show the code you are using to read and write to the cache. You also have not explained how you configured AppFabric and these machines. Are they joined to the cluster?"
I do reads through the AppFabric Read-Through API, which is implemented in a separate project. Writing to the cache is done just with the Put() method. As this is a sample project, I thought there was no need to update the database, only the cache cluster.
The above config settings are used by each of the applications running on the 2 machines.
I have allowed access for these 2 machines by granting them access in the cache cluster. One machine is both the AppFabric cache server and a cache client (i.e. Machine1).
Hope this helps you to answer. Find the code below:
public class CacheUtil
{
private static DataCacheFactory _factory = null;
private static DataCache _cache = null;
static CacheUtil()
{
if (_cache == null)
{
// Declare array for cache host(s).
DataCacheServerEndpoint[] servers = new DataCacheServerEndpoint[1];
servers[0] = new DataCacheServerEndpoint("H1011.hoboo.net", 22233);
// Set the local cache properties. In this example, it
// is timeout-based with a timeout of 300 seconds (5 minutes).
DataCacheLocalCacheProperties localCacheConfig;
TimeSpan localTimeout = new TimeSpan(0, 5, 0);
localCacheConfig = new DataCacheLocalCacheProperties(60, localTimeout, DataCacheLocalCacheInvalidationPolicy.TimeoutBased);
// Setup the DataCacheFactory configuration.
DataCacheFactoryConfiguration factoryConfig = new DataCacheFactoryConfiguration();
//factoryConfig.ChannelOpenTimeout = new TimeSpan(0, 0, 0);
//factoryConfig.Servers = servers;
//factoryConfig.LocalCacheProperties = localCacheConfig;
_factory = new DataCacheFactory();
//_factory = new DataCacheFactory(factoryConfig);
_cache = _factory.GetCache("default");
}
}
public static DataCache GetCache()
{
if (_cache != null) return _cache;
try
{
RuntimeContext.WriteAppFabricErrorLog(new AppFabricLogger()
{
CacheKey = "Connected to AppFabric Cache Server.",
CacheData = "Connected to AppFabric Cache Server.",
ErrorString = "Connected to AppFabric Cache Server."
});
}
catch (Exception ex)
{
//Suppress Error
}
return _cache;
}
}
The other class, which has Get():
public static object Get(string pName)
{
object cachedItem = null;
try
{
//Check configuration settings for AppFabric.
bool appFabricCache;
bool.TryParse(System.Configuration.ConfigurationManager.AppSettings["AppFabricCache"], out appFabricCache);
if (appFabricCache)
{
//Get data from AppFabric Cache Server.
cachedItem = CacheUtil.GetCache().Get(pName);
}
else
{
//Get data from Local Cache Server.
cachedItem = RuntimeContextOlderVersion.Get(pName);
}
}
catch (Exception Ex)
{
//If it fails, write the reason to the log file.
WriteAppFabricErrorLog(new AppFabricLogger()
{
CacheKey = pName,
CacheData = "Get Method",
ErrorString = Ex.ToString()
});
}
return cachedItem;
}
@stuartd: Yes, I have enabled notifications. You can see that in my app config.
For Stuart:
PS C:\Windows\system32> get-cacheconfig
cmdlet Get-CacheConfig at command pipeline position 1
Supply values for the following parameters:
CacheName: default
CacheName : default
TimeToLive : 10 mins
CacheType : Partitioned
Secondaries : 0
MinSecondaries : 0
IsExpirable : True
EvictionType : LRU
NotificationsEnabled : True
WriteBehindEnabled : False
WriteBehindInterval : 300
WriteBehindRetryInterval : 60
WriteBehindRetryCount : -1
ReadThroughEnabled : True
ProviderType : SampleProvider.Provider,SampleProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=cde85af3c5f6411e
ProviderSettings : {"DBConnection"="Database=Test123;Server=..**.**;uid=****;pwd=*****;connection timeout=5000"}