I need to mock HBase for a unit test. In particular, my program requires a Connection to HBase. How should I do this? I simply used HBaseTestingUtility.getConnection(), but that doesn't work.
Thank you!
This is how I got the connection established with the HBaseTestingUtility class (version 2.0.2):
import org.apache.hadoop.hbase.HBaseTestingUtility

val utility = new HBaseTestingUtility()
utility.startMiniCluster() // defaults to 1 master and 1 region server
val connection = utility.getConnection()
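When the tests are done, remember to stop the mini cluster again so the test JVM can shut down cleanly:
utility.shutdownMiniCluster()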
In case you need to add some specific configuration (e.g. security settings), you can add an hbase-site.xml to your test resources.
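For example, a minimal hbase-site.xml enabling Kerberos authentication could look like the sketch below (hbase.security.authentication is a standard HBase property; substitute whatever settings your program actually needs):
<configuration>
  <property>
    <name>hbase.security.authentication</name>
    <value>kerberos</value>
  </property>
</configuration>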
I want to set up the UI for the notary in the CorDapp samples. As the notary's web port is not configured by default, I am trying to change the client's Gradle file to configure the notary.
Is there any other way to configure the notary's UI?
I checked, and it can be seen through the Node Explorer. Is there any other way to inspect the notary on a web front end?
You can configure the notary's web port in a similar way as you would for any other node.
Your notary must have an RPC address configured.
Once you have an RPC address configured, you can either use the default Corda webserver (which is now deprecated) or configure your own webserver (for example the spring-webserver sample).
Rather than specifying a web port, you can define your own Spring Boot server and connect to the node via RPC.
Step 1 Define your Spring boot server
import org.springframework.boot.Banner
import org.springframework.boot.SpringApplication
import org.springframework.boot.autoconfigure.SpringBootApplication

@SpringBootApplication
private open class Starter

/**
 * Starts our Spring Boot application.
 */
fun main(args: Array<String>) {
    val app = SpringApplication(Starter::class.java)
    app.setBannerMode(Banner.Mode.OFF)
    app.isWebEnvironment = true
    app.run(*args)
}
Step 2 Start your server by defining a starter task in your Gradle build file
task runPartyAServer(type: JavaExec) {
classpath = sourceSets.main.runtimeClasspath
main = 'net.corda.server.ServerKt'
}
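You can then start the server from the command line by invoking the task by name:
./gradlew runPartyAServer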
Step 3 Define the RPC configuration used to connect to the node.
server.port=10055
config.rpc.username=user1
config.rpc.password=test
config.rpc.host=localhost
config.rpc.port=10008
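These properties can then be injected into the class that opens the RPC connection using Spring's @Value annotation (a sketch; the property keys match the file above and the variable names match those used in Step 4):
// Inject the RPC settings defined in Step 3 (fields of a Spring-managed class)
@Value("\${config.rpc.host}") lateinit var host: String
@Value("\${config.rpc.port}") var rpcPort: Int = 0
@Value("\${config.rpc.username}") lateinit var username: String
@Value("\${config.rpc.password}") lateinit var password: String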
Step 4 Connect to the node using the configuration defined above.
val rpcAddress = NetworkHostAndPort(host, rpcPort)
val rpcClient = CordaRPCClient(rpcAddress)
val rpcConnection = rpcClient.start(username, password)
proxy = rpcConnection.proxy
Step 5 Use the proxy to connect to the notary node.
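For example, once the proxy is available you can look up the notaries known to the node (a small sketch; notaryIdentities() is part of Corda's CordaRPCOps interface):
// Query and print the notary identities known to the connected node
val notaries = proxy.notaryIdentities()
notaries.forEach { println("Notary: ${it.name}") }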
You can refer to the complete code here.
My Spring Boot web application uses Cassandra via the DataStax client, and the connection is established as follows:
public CassandraManager(@Autowired CassandraConfig cassandraConfig) {
    config = cassandraConfig;
    cluster = Cluster.builder()
            .addContactPoint(config.getHost())
            .build();
    session = cluster.connect(config.getKeyspace());
}
When I run my unit tests, the Spring Boot application tries to load the CassandraManager bean and connect to the Cassandra DB, which is not up for the unit tests as I do not need it. I get the following error: [localhost/127.0.0.1:9042] Cannot connect.
Is there a way to avoid loading this CassandraManager bean when running my unit tests, as they do not need to connect to the DB? Is it good practice to do so?
You can try something like the below, which worked for me, assuming you are using spring-data-cassandra.
First, we create another configuration class to be used by the tests that do not need a Cassandra connection. It is required because we need to exclude the CassandraDataAutoConfiguration class. Ex:
@SpringBootApplication(exclude = {CassandraDataAutoConfiguration.class})
public class NoCassandraConfig {
}
Then we will use this configuration on our test(s). Ex:
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@RunWith(SpringRunner.class)
@ContextConfiguration(classes = NoCassandraConfig.class)
public class UtilitiesTest {
    /* Lots of tests that do not need a DB connection */
}
And there you go.
I'm interested in using Azure's DocumentDB, but I can't see how to sensibly develop against it, run unit tests / integration tests, or have our continuous integration server run against it.
AFAICS there's no way to run a local version of the DocumentDB server; you can only run against a provisioned instance of DocumentDB in Azure.
This means that:
each developer must develop against their own provisioned instance of DocumentDB
each time a developer runs integration tests, it's against (their own) remote DocumentDB
continuous integration: I have to assume there's a way to programmatically provision another DocumentDB instance for the build? Even then, the CI server is running against the remote DocumentDB
Any advice on how people are approaching this with docdb would be much appreciated.
You are correct that there is no version of DocumentDB that you run on your own computers. So, I write unit tests for all stored procedures (sprocs) using documentdb-mock (it runs client-side on Node.js). I do test-first design (TDD) with this client-side testing, which has no requirement for connecting to Azure, but it only tests sprocs.
I run a number of other tests on the live Azure platform. In addition to the client-side tests, I test sprocs live against a real DocumentDB collection. I also test all client-side SDK code (only used for reads, since I do all writes in sprocs) on the live system.
I used to have a single collection per developer for live testing, but because each test couldn't guarantee the state of the database, some tests failed intermittently. So I switched to creating and deleting a database and collection for each test. It's slightly slower, but not as slow as you would expect. I use nodeunit, and below is my setup and tear-down code. Some points about this code:
I preload all sprocs every time since I use sprocs for all writes. I only use the client-side SDK for reads. You could skip this if you don't use sprocs.
I am using the documentdb-utils WrappedClient because it provides some added functionality (429 retry, a better async API, etc.). It's a drop-in replacement for the standard library (although it does not yet support partitioned collections), but you don't need it for the example code below to work.
The delay in the tear-down was added to fix some intermittent failures that occurred when the collection was removed while some operations were still pending.
Each test file looks like this:
path = require('path')
{DocumentClient} = require('documentdb')
async = require('async')
{WrappedClient, loadSprocs, getLinkArray, getLink} = require('documentdb-utils')

client = null
wrappedClient = null
collectionLinks = null

exports.underscoreTest =

  setUp: (setUpCallback) ->
    urlConnection = process.env.DOCUMENT_DB_URL
    masterKey = process.env.DOCUMENT_DB_KEY
    auth = {masterKey}
    client = new DocumentClient(urlConnection, auth)
    wrappedClient = new WrappedClient(client)
    client.deleteDatabase('dbs/dev-test-database', () ->
      client.createDatabase({id: 'dev-test-database'}, (err, response, headers) ->
        databaseLink = response._self
        client.createCollection(databaseLink, {id: '1'}, {offerType: 'S2'}, (err, response, headers) ->
          collectionLinks = getLinkArray(['dev-test-database'], [1])
          scriptsDirectory = path.join(__dirname, '..', 'sprocs')
          spec = {scriptsDirectory, client, collectionLinks}
          loadSprocs(spec, (err, result) ->
            sprocLink = getLink(collectionLinks[0], 'createVariedDocuments')
            console.log("sprocs loaded for test")
            setUpCallback(err, result)
          )
        )
      )
    )

  test1: (test) ->
    ...
    test.done()

  test2: (test) ->
    ...
    test.done()

  ...

  tearDown: (callback) ->
    f = () ->
      client.deleteDatabase('dbs/dev-test-database', () ->
        callback()
      )
    setTimeout(f, 500)
A local version of DocumentDB is now available: https://learn.microsoft.com/en-us/azure/documentdb/documentdb-nosql-local-emulator
I would like to upload files to S3 using boto3.
The code will run on a server without DNS configured, and I want the upload to be routed through a specific network interface.
Any idea if there's any way to solve these issues?
1) Add the S3 endpoint addresses to /etc/hosts (see the example entry below); the endpoints are listed at http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
2) Configure a specific route to the network interface; see this info on Super User:
https://superuser.com/questions/181882/force-an-application-to-use-a-specific-network-interface
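For step 1, a hypothetical /etc/hosts entry could look like this (203.0.113.5 is a placeholder; use an actual S3 endpoint IP for your region from the list linked above):
203.0.113.5    s3.amazonaws.com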
As for setting a network interface, I did a workaround that allows setting the source IP for each connection made by boto.
Just change botocore's awsrequest.py, modifying the AWSHTTPConnection class as follows:
a) Before __init__() of AWSHTTPConnection, add:
source_address = None
b) Inside __init__(), add:
if AWSHTTPConnection.source_address is not None:
kwargs["source_address"] = AWSHTTPConnection.source_address
Now, from your code you should do the following before you start using boto:
from botocore.awsrequest import AWSHTTPConnection
AWSHTTPConnection.source_address = (source_ip_str, source_port)
Use source_port = 0 to let the OS choose a random port (you probably want this option; see the Python socket docs for more details).
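If you would rather not edit botocore's source, the same idea can be applied as a runtime monkey-patch (a sketch resting on the same assumption as the code above, namely that AWSHTTPConnection forwards its keyword arguments to http.client.HTTPConnection; the IP, bucket, and file names are placeholders):
import boto3
from botocore.awsrequest import AWSHTTPConnection

_original_init = AWSHTTPConnection.__init__

def _patched_init(self, *args, **kwargs):
    # Bind every outgoing connection to this local address; port 0 lets
    # the OS pick an ephemeral source port. 192.0.2.10 is a placeholder IP.
    kwargs["source_address"] = ("192.0.2.10", 0)
    _original_init(self, *args, **kwargs)

AWSHTTPConnection.__init__ = _patched_init

s3 = boto3.client("s3")
s3.upload_file("local.txt", "my-bucket", "remote.txt")  # placeholder names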
I've been using the latest code in 4.1.0-BUILD-SNAPSHOT as I need some of the new bug fixes in the 4.1 branch and just noticed that "neo4jServer()" is no longer a method exposed by Neo4jConfiguration. What is the new way to initialize a server connection and an in-memory version for unit tests? Before I was using "RemoteServer" and "InProcessServer", respectively.
Please note, the official documentation will be updated shortly.
In the meantime:
What's changed
SDN 4.1 uses the new Neo4j OGM 2.0 libraries. OGM 2.0 introduces API changes, largely due to the addition of support for Embedded as well as Remote Neo4j. Consequently, connecting to a production database is now accomplished using an appropriate Driver, rather than using the RemoteServer or the InProcessServer, which are deprecated.
For testing, we recommend using the EmbeddedDriver. It is still possible to create an in-memory test server, but that is not covered in this answer.
Available Drivers
The following Driver implementations are currently provided:
http: org.neo4j.ogm.drivers.http.driver.HttpDriver
embedded: org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver
A driver implementation for the Bolt protocol (Neo4j 3.0) will be available soon.
Configuring a driver
There are two ways to configure a driver: using a properties file, or via Java configuration. Variations on these themes exist (particularly for passing credentials), but for now the following should get you going:
Configuring the Http Driver
The Http Driver connects to and communicates with a Neo4j server over Http. An Http Driver must be used if your application is running in client-server mode. Please note the Http Driver will attempt to connect to a server running in a separate process. It can't be used for spinning up an in-process server.
Properties file configuration:
The advantage of using a properties file is that it requires no changes to your Spring configuration.
Create a file called ogm.properties somewhere on your classpath. It should contain the following entries:
driver=org.neo4j.ogm.drivers.http.driver.HttpDriver
URI=http://user:password@localhost:7474
Java configuration:
The simplest way to configure the Driver is to create a Configuration bean and pass it as the first argument to the SessionFactory constructor in your Spring configuration:
import org.neo4j.ogm.config.Configuration;
...

@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.http.driver.HttpDriver")
        .setURI("http://user:password@localhost:7474");
    return config;
}

@Bean
public SessionFactory getSessionFactory() {
    return new SessionFactory(getConfiguration(), <packages> );
}
Configuring the Embedded Driver
The Embedded Driver connects directly to the Neo4j database engine. There is no server involved, therefore no network overhead between your application code and the database. You should use the Embedded driver if you don't want to use a client-server model, or if your application is running as a Neo4j Unmanaged Extension.
You can specify a permanent data store location to provide durability of your data after your application shuts down, or you can use an impermanent data store, which will only exist while your application is running (ideal for testing).
Properties file configuration (permanent data store)
Create a file called ogm.properties somewhere on your classpath. It should contain the following entries:
driver=org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver
URI=file:///var/tmp/graph.db
Properties file configuration (impermanent data store)
driver=org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver
To use an impermanent data store, simply omit the URI property.
Java Configuration
The same technique is used for configuring the Embedded driver as for the Http Driver. Set up a Configuration bean and pass it as the first argument to the SessionFactory constructor:
import org.neo4j.ogm.config.Configuration;
...
@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver")
        .setURI("file:///var/tmp/graph.db");
    return config;
}

@Bean
public SessionFactory getSessionFactory() {
    return new SessionFactory(getConfiguration(), <packages> );
}
If you want to use an impermanent data store (e.g. for testing), do not set the URI attribute on the Configuration:
@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver");
    return config;
}
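Once the SessionFactory bean is defined, a test can obtain a Session from it (a short sketch; openSession() and purgeDatabase() are part of the OGM 2.0 API, and "org.example.domain" stands in for your own domain package):
// Open a session against the impermanent embedded store
SessionFactory sessionFactory = new SessionFactory(getConfiguration(), "org.example.domain");
Session session = sessionFactory.openSession();
session.purgeDatabase(); // clear all data between tests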