Player Spawning Bug - pawn

When I run my game mode, the player is supposed to spawn at the coordinates given in OnPlayerSpawn. It works, but after I press the "Spawn" button the screen flashes, showing my player at the default SA-MP coordinates (the little place with the elevators), before moving to the correct position. How can I fix that flash?
I've tried everything here so far: http://forum.sa-mp.com/showthread.php?t=480083
public OnGameModeInit()
{
    SetGameModeText("Closed Beta");
    AddPlayerClass(210, 1958.3783, 1343.1572, 15.3746, 269.1425, 24, 123123, 25, 123123, 34, 123123);
    AddPlayerClass(230, 1958.3783, 1343.1572, 15.3746, 269.1425, 24, 123123, 25, 123123, 34, 123123);
    AddPlayerClass(38, 1958.3783, 1343.1572, 15.3746, 269.1425, 24, 123123, 25, 123123, 34, 123123);
    AddPlayerClass(53, 1958.3783, 1343.1572, 15.3746, 269.1425, 24, 123123, 25, 123123, 34, 123123);
    AddPlayerClass(134, 1958.3783, 1343.1572, 15.3746, 269.1425, 24, 123123, 25, 123123, 34, 123123);
    AddPlayerClass(170, 1958.3783, 1343.1572, 15.3746, 269.1425, 24, 123123, 25, 123123, 34, 123123);
    AddPlayerClass(177, 1958.3783, 1343.1572, 15.3746, 269.1425, 24, 123123, 25, 123123, 34, 123123);
    // Create some NRG-500s
    AddStaticVehicle(522, 1293.4578, -831.7721, 82.7060, 2.2115, 8, 164);
    AddStaticVehicle(522, 1291.4998, -832.0693, 82.7150, 0.9063, 1, 89);
    AddStaticVehicle(522, 1289.1138, -832.2216, 82.7144, 1.8299, 20, 0);
    AddStaticVehicle(522, 1286.9045, -832.1899, 82.7124, 357.0424, 150, 49);
    AddStaticVehicle(522, 1283.9424, -832.0230, 82.7132, 1.8036, 200, 78);
    AddStaticVehicle(411, 1242.7148, -805.3672, 83.8677, 178.3237, 1, 0); // Infernus
    AddStaticVehicle(451, 1248.9519, -805.4118, 83.8476, 178.6428, 0, 1); // Turismo
    AddStaticVehicle(541, 1255.2100, -804.8454, 83.7656, 184.1492, 1, 0); // Bullet
    // Create the textdraw for welcoming the player
    gWelcome = TextDrawCreate(320.0, 240.0, "Welcome to Uuri's Mapping Server!");
    TextDrawSetShadow(gWelcome, 3);
    TextDrawSetOutline(gWelcome, 2);
    TextDrawFont(gWelcome, 3);
    TextDrawBoxColor(gWelcome, 0x003333FF);
    TextDrawAlignment(gWelcome, 2);
    // Use the running animations I like
    UsePlayerPedAnims();
    return 1;
}
public OnPlayerConnect(playerid)
{
    // Show the player the welcome textdraw
    TextDrawShowForPlayer(playerid, gWelcome);
    // Send a client message to all players when a player connects
    new name[MAX_PLAYER_NAME];
    new string[128];
    GetPlayerName(playerid, name, sizeof(name));
    format(string, sizeof(string), "%s has connected to the server!", name);
    SendClientMessageToAll(LIGHT_GREEN, string);
    return 1;
}
public OnPlayerRequestClass(playerid, classid)
{
    SetPlayerPos(playerid, 2000.5919, 1527.3319, 14.6172);
    SetPlayerCameraPos(playerid, 2000.6874, 1528.9332, 14.6223);
    SetPlayerCameraLookAt(playerid, 2000.5919, 1527.3319, 14.6172);
    SetPlayerFacingAngle(playerid, 358.2982);
    return 1;
}
public OnPlayerSpawn(playerid)
{
    // Hide the textdraw when the player spawns
    TextDrawHideForPlayer(playerid, gWelcome);
    // Print a client message to all players when a player spawns
    new name[MAX_PLAYER_NAME];
    new string[128];
    GetPlayerName(playerid, name, sizeof(name));
    format(string, sizeof(string), "%s has spawned!", name);
    SendClientMessageToAll(LIGHT_GREEN, string);
    // Set the player's spawn coordinates; put the camera behind the player
    SetPlayerPos(playerid, 1280.5247, -813.6929, 83.6532);
    SetCameraBehindPlayer(playerid);
    SetPlayerFacingAngle(playerid, 180.0);
    return 1;
}

This is just how SA-MP works. The spawn data configured with AddPlayerClass (or SetSpawnInfo) is handled client-side: the client spawns at the coordinates passed to AddPlayerClass immediately, with no server interaction. Only once the server detects that the player has spawned does it run the code in your OnPlayerSpawn callback, which is why you see the flash.
One way to avoid this is to skip SetPlayerPos in OnPlayerSpawn and instead put the coordinates you want the player to spawn at directly into your AddPlayerClass calls.
For example:
AddPlayerClass(210, 1280.5247,-813.6929,83.6532, 269.1425, 24, 123123, 25, 123123, 34, 123123);

Related

How do I load my Solana wallet using my private key file?

I'm attempting to use the private key I generated using the Solana command-line to create a wallet in JavaScript / Node.
I want to use the web3.Keypair.fromSeed() method.
Here are the steps I've taken so far.
1. Created a Solana wallet: solana-keygen new -o keyfile.json
2. Displayed what is in that file -- it's a 64-byte array (this is a test key, so no worries that this is the private key):
[237,158,92,107,132,192,1,57,8,20,213,108,29,227,37,8,3,105,196,244,8,221,184,199,62,253,98,131,33,165,165,215,14,7,46,23,221,242,240,226,94,79,161,31,192,163,13,25,106,53,34,215,83,124,162,156,8,97,194,180,213,179,33,68]
However, the call to fromSeed() only wants 32 bytes.
3. Checked the Solana address so I know when everything is working properly:
> solana address
wm4MapPtFaUoSsyBJppVnChYMGvHzTeAL6BNZ5GmSqH
That's the public key.
How do I call web3.Keypair.fromSeed() to load that private key and get my public address (aka public key)?
let web3 = require('@solana/web3.js');
let splToken = require('@solana/spl-token');
// load up the first 32 bytes of the 64 byte array that was in our keyfile.json
// Only need the first 32 bytes so I use slice() just to make sure it's the correct length
let firstWinPrivKey = [237,158,92,107,132,192,1,57,8,20,213,108,29,227,37,8,3,105,196,244,8,221,184,199,62,253,98,131,33,165,165,215,14,7,46,23,221,242,240,226,94,79,161,31,192,163,13,25,106,53,34,215,83,124,162,156,8,97,194,180,213,179,33,68]
.slice(0,32);
// print the length of the array so we know it is correct
// the fromSeed() method requires 32 bytes
console.log(firstWinPrivKey.length);
let firstWinWallet = web3.Keypair.fromSeed(Uint8Array.from(firstWinPrivKey));
console.log(firstWinWallet.secretKey);
console.log(firstWinWallet.publicKey.toString());
Notice that you have to cast the array to a Uint8Array (Uint8Array.from()).
When we print out the secretKey, you'll see the full 64 bytes from the keyfile again: the 32-byte seed you passed in, followed by the derived public key.
And finally, when we print out the publicKey, you'll see the same value that we saw with the command line
> solana address
Now you can use the wallet in code.
Here's the final output from this short script:
32
Uint8Array(64) [
237, 158, 92, 107, 132, 192, 1, 57, 8, 20, 213,
108, 29, 227, 37, 8, 3, 105, 196, 244, 8, 221,
184, 199, 62, 253, 98, 131, 33, 165, 165, 215, 14,
7, 46, 23, 221, 242, 240, 226, 94, 79, 161, 31,
192, 163, 13, 25, 106, 53, 34, 215, 83, 124, 162,
156, 8, 97, 194, 180, 213, 179, 33, 68
]
wm4MapPtFaUoSsyBJppVnChYMGvHzTeAL6BNZ5GmSqH
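As an aside on why slicing the first 32 bytes works (a minimal Python sketch using no Solana library, with a hand-written base58 helper purely for illustration): the CLI keyfile is the raw 64-byte ed25519 secret key, i.e. the 32-byte private seed followed by the 32-byte public key, which is why the second half base58-encodes to the address shown by `solana address`.

```python
# The Solana CLI keyfile is a JSON array of 64 bytes: the 32-byte ed25519
# private seed (what Keypair.fromSeed() wants) followed by the 32-byte
# public key (whose base58 encoding is the wallet address).
keyfile_bytes = bytes([
    237, 158, 92, 107, 132, 192, 1, 57, 8, 20, 213, 108, 29, 227, 37, 8,
    3, 105, 196, 244, 8, 221, 184, 199, 62, 253, 98, 131, 33, 165, 165, 215,
    14, 7, 46, 23, 221, 242, 240, 226, 94, 79, 161, 31, 192, 163, 13, 25,
    106, 53, 34, 215, 83, 124, 162, 156, 8, 97, 194, 180, 213, 179, 33, 68,
])
seed, pubkey = keyfile_bytes[:32], keyfile_bytes[32:]

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    """Bitcoin-alphabet base58, the encoding Solana uses for addresses."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = B58_ALPHABET[rem] + out
    # each leading zero byte encodes as a leading '1'
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

print(len(seed), len(pubkey))  # 32 32
print(b58encode(pubkey))       # the address that `solana address` printed
```

This also explains the output above: fromSeed() reconstructs the same 64-byte secretKey from the 32-byte seed, so printing it shows the original keyfile bytes.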
If you want to use ".json" file, you can do something like this:
import Fs from "@supercharge/fs";
import { Connection, Keypair, LAMPORTS_PER_SOL } from "@solana/web3.js";

const decodedKey = new Uint8Array(
    JSON.parse(
        // replace with the actual path from the home dir, e.g. '.config/solana/devnet.json'
        Fs.readFileSync(Fs.homeDir("path to key.json")).toString()
    )
);
const keyPair = Keypair.fromSecretKey(decodedKey);
I use the additional package https://www.npmjs.com/package/@supercharge/fs for working with files.
import { Keypair } from "@solana/web3.js";
import fs from "fs";

function loadKeypairFromFile(filename: string): Keypair {
    const secret = JSON.parse(fs.readFileSync(filename).toString()) as number[];
    const secretKey = Uint8Array.from(secret);
    return Keypair.fromSecretKey(secretKey);
}

Input size mismatch error when using pre-trained inceptionV3 model for image classification

I'm having trouble training a model that uses the pre-trained InceptionV3 on my own image data set.
I'm loading images using the data.Dataset loader and 'transforms' for image transformation.
Here's my inceptionV3 model
inceptionV3 = torchvision.models.inception_v3(pretrained=True)
pretrained_model = nn.Sequential(*list(inceptionV3.children()))
pretrained_model = nn.Sequential(*list(pretrained_model.children())[:-1])
for param in pretrained_model.parameters():
    param.requires_grad = False
Here's my transforms code
data_transforms = transforms.Compose([
    transforms.Scale((299, 299)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
I'm getting this error at the line 'out_features = pretrained_model(inputs)'
Traceback (most recent call last):
  File "build_models.py", line 42, in <module>
    use_gpu=use_gpu)
  File "/home/ubuntu/apparel-styles/ml_src/classifiers.py", line 488, in create_attributes_model
    batch_size, num_workers, num_epochs, use_gpu=use_gpu)
  File "/home/ubuntu/apparel-styles/ml_src/classifiers.py", line 276, in train_model
    flatten_pretrained_out=flatten_pretrained_out)
  File "/home/ubuntu/apparel-styles/ml_src/classifiers.py", line 181, in train_attribute_model
    out_features = pretrained_model(inputs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 68, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 78, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
    raise output
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 42, in _worker
    output = module(*input, **kwargs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/modules/container.py", line 67, in forward
    input = module(input)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torchvision/models/inception.py", line 312, in forward
    x = self.fc(x)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/ubuntu/apparel-styles/env/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 835, in linear
    return torch.addmm(bias, input, weight.t())
RuntimeError: size mismatch at /pytorch/torch/lib/THC/generic/THCTensorMathBlas.cu:243
After getting this error, I tried to print the input size. Its value is:
print(inputs.shape)
torch.Size([256, 3, 299, 299])

error while using pysnmp with the option privProtocol=usm3DESEDEPrivProtocol

I want to get information from an SNMP device which is configured with auth protocol MD5 and priv protocol 3DES.
The configuration statement is:
snmp-server user testuser testgroup v3 auth md5 authmd5pwd priv 3des privpwddes
The pysnmp code to access the device is
from pysnmp.hlapi import *

def get(mib):
    errorIndication, errorStatus, errorIndex, varBinds = next(
        getCmd(SnmpEngine(),
               UsmUserData('nmsuser', 'authmd5pwd', 'privpwddes',
                           authProtocol=usmHMACMD5AuthProtocol,
                           privProtocol=usm3DESEDEPrivProtocol),
               UdpTransportTarget(('10.96.158.251', 161)),
               ContextData(),
               ObjectType(ObjectIdentity(mib)))
    )
    if errorIndication:
        print(errorIndication)
    elif errorStatus:
        print('%s at %s' % (errorStatus.prettyPrint(),
                            errorIndex and varBinds[int(errorIndex) - 1][0] or '?'))
    else:
        for varBind in varBinds:
            print(varBind[1])

if __name__ == "__main__":
    get('.1.3.6.1.2.1.1.5.0')
When I execute this I get
Traceback (most recent call last):
File "pytest.py", line 24, in
get('.1.3.6.1.2.1.1.5.0')
File "pytest.py", line 11, in get
ObjectType(ObjectIdentity(mib)))
File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/hlapi/asyncore/sync/cmdgen.py", line 113, in getCmd
File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/carrier/asyncore/dispatch.py", line 50, in runDispatcher
pysnmp.error.PySnmpError: poll error: Traceback (most recent call last):
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/carrier/asyncore/dispatch.py", line 46, in runDispatcher
use_poll=True, map=self.__sockMap, count=1)
; File "/usr/lib/python2.7/asyncore.py", line 220, in loop
poll_fun(timeout, map)
; File "/usr/lib/python2.7/asyncore.py", line 201, in poll2
readwrite(obj, flags)
; File "/usr/lib/python2.7/asyncore.py", line 123, in readwrite
obj.handle_error()
; File "/usr/lib/python2.7/asyncore.py", line 108, in readwrite
obj.handle_read_event()
; File "/usr/lib/python2.7/asyncore.py", line 449, in handle_read_event
self.handle_read()
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/carrier/asyncore/dgram/base.py", line 163, in handle_read
self._cbFun(self, transportAddress, incomingMessage)
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/carrier/base.py", line 70, in _cbFun
self, transportDomain, transportAddress, incomingMessage
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/entity/engine.py", line 152, in __receiveMessageCbFun
self, transportDomain, transportAddress, wholeMsg
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/proto/rfc3412.py", line 344, in receiveMessage
statusInformation
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/proto/rfc3412.py", line 533, in __expireRequest
cachedParams['cbCtx'])
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/entity/rfc3413/cmdgen.py", line 104, in processResponsePdu
(origSendRequestHandle, cbFun, cbCtx))
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/proto/rfc3412.py", line 153, in sendPdu
pduVersion, PDU, expectResponse, sendPduHandle
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/proto/mpmod/rfc3412.py", line 240, in prepareOutgoingMessage
securityEngineId, securityName, securityLevel, scopedPDU
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/proto/secmod/rfc3414/service.py", line 525, in generateRequestMsg
None)
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/proto/secmod/rfc3414/service.py", line 395, in __generateRequestOrResponseMsg
(snmpEngineBoots, snmpEngineTime, None), dataToEncrypt
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/proto/secmod/eso/priv/des3.py", line 117, in encryptData
encryptKey, snmpEngineBoots
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/proto/secmod/eso/priv/des3.py", line 77, in __getEncryptionKey
preIV = privKey[24:32]
; File "/usr/local/lib/python2.7/dist-packages/pyasn1/type/univ.py", line 996, in __getitem__
return self.clone(self._value[i])
; File "/usr/local/lib/python2.7/dist-packages/pysnmp-4.4.3-py2.7.egg/pysnmp/proto/rfc1902.py", line 202, in clone
return univ.OctetString.clone(self, *args, **kwargs).setFixedLength(self.getFixedLength())
; File "/usr/local/lib/python2.7/dist-packages/pyasn1/type/base.py", line 349, in clone
return self.__class__(value, **initilaizers)
; File "/usr/local/lib/python2.7/dist-packages/pyasn1/type/univ.py", line 819, in __init__
base.AbstractSimpleAsn1Item.__init__(self, value, **kwargs)
; File "/usr/local/lib/python2.7/dist-packages/pyasn1/type/base.py", line 246, in __init__
raise exType('%s at %s' % (exValue, self.__class__.__name__))
;ValueConstraintError: , > failed at: ValueConstraintError(" failed at: ValueConstraintError('',)",) at OctetString
This seems to be a bug in pysnmp 4.4.3 when using 3DES with a short-hash HMAC such as MD5.
You can either use another auth algorithm that produces at least a 32-octet hash, or pull the fixed pysnmp (4.4.4) from its master branch.
Your script seems to work with the simulator once you change username/passphrase.
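The size arithmetic behind that crash can be sketched with the standard library alone (an illustration, not pysnmp's actual key-localization code): 3DES-EDE privacy needs a 32-octet localized key -- 24 octets of 3DES keying material plus an 8-octet pre-IV, which is what the `privKey[24:32]` slice in the traceback extracts -- but localization based on MD5 yields only a 16-octet key, so without the key-extension step the slice falls off the end.

```python
import hashlib

# 3DES-EDE in SNMPv3 needs a 32-octet localized privacy key:
# 24 octets of 3DES keying material + an 8-octet pre-IV
# (the `privKey[24:32]` slice seen in the traceback).
NEEDED = 32

for algo in ("md5", "sha1", "sha256"):
    size = hashlib.new(algo).digest_size
    status = "long enough" if size >= NEEDED else "needs key extension"
    print(algo, size, status)  # md5 -> 16, sha1 -> 20, sha256 -> 32
```

This is why switching to a SHA-2-based auth protocol (e.g. pysnmp's usmHMAC192SHA256AuthProtocol) sidesteps the problem, while 4.4.4 fixes the key-extension path so short-hash HMACs work too.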

Using tensorflow command in keras

Is it possible in Keras to use TensorFlow tensor-manipulation commands for altering the shape of data, or splitting by an axis, or something like that?
I am currently trying something like this:
def row_convolution(input):
    filter_size = 8
    temp_list = []
    input = tf.placeholder(tf.float32, shape=(None, 33, 1, 8, 45, 3))
    for section in tf.unstack(input, axis=1):
        section = tf.reshape(section, [1, 8, 45, 3])
        temp_list.append(Conv2D(filters=1, kernel_size=(8, 45), activation='relu', name='conv')(section))
    print "len: " + str(len(temp_list))
    return tf.convert_to_tensor(np.array(temp_list))
This is the function for a Lambda layer, but the last command gives me an error message stating:
Traceback (most recent call last):
File "keras_cnn_phoneme_original_fit_generator.py", line 182, in <module>
fws()
File "keras_cnn_phoneme_original_fit_generator.py", line 167, in fws
model.add(Lambda(row_convolution,input_shape = (33,1,8,45,3)))
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 422, in add
layer(x)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 554, in __call__
output = self.call(inputs, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/layers/core.py", line 659, in call
return self.function(inputs, **arguments)
File "keras_cnn_phoneme_original_fit_generator.py", line 147, in row_convolution
return tf.convert_to_tensor(np.array(temp_list))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 637, in convert_to_tensor
as_ref=False)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 702, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 110, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 99, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 451, in make_tensor_proto
append_fn(tensor_proto, proto_values)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 109, in SlowAppendObjectArrayToTensorProto
tensor_proto.string_val.extend([compat.as_bytes(x) for x in proto_values])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/compat.py", line 65, in as_bytes
(bytes_or_text,))
TypeError: Expected binary or unicode string, got <tf.Tensor 'lambda_1/conv/Relu:0' shape=(1, 1, 1, 1) dtype=float32>

Realm unit test with Robolectric and PowerMock generating zero % code coverage

I followed the instructions here:
https://github.com/robolectric/robolectric/wiki/Using-PowerMock
https://github.com/realm/realm-java/tree/master/examples/unitTestExample
to set up my Realm unit tests. The tests all pass, but when I try to generate the code coverage to push to SonarQube, the coverage report shows 0% coverage. When I run the standard Android Studio code coverage (as described in the answer here: How to get code coverage using Android Studio?), it generates a report showing 91% coverage. The problem with this approach, however, is that it's an HTML report, and there's no way to have it generate an XML report that SonarQube can use.
This is the code of the test class:
@RunWith(RobolectricTestRunner.class)
@PowerMockIgnore({"org.mockito.*", "org.robolectric.*", "android.*"})
@PrepareForTest({Realm.class, RealmConfiguration.class, RealmCore.class, RealmLog.class})
@SuppressStaticInitializationFor("io.realm.internal.Util")
@Config(constants = BuildConfig.class, manifest = "src/main/AndroidManifest.xml", sdk = 21)
public class DecisionTreeTest
{
    @Captor
    ArgumentCaptor<Realm.Transaction.Callback> realmCallbackCaptor;

    // Robolectric, Using PowerMock: https://github.com/robolectric/robolectric/wiki/Using-PowerMock
    @Rule
    public PowerMockRule rule = new PowerMockRule();

    private Context mockContext;

    private final byte[] fakeRealmKey = {
        -122, -115, -113, -111, -105, -104, -101, -99, -94, -93, -90, -87,
        -77, -74, -67, -66, -63, -61, -56, -53, -48, -47, -33, -31,
        -30, -28, -22, -17, -5, -3, -1, 3, 8, 11, 17, 18,
        21, 22, 27, 30, 40, 42, 51, 52, 53, 54, 57, 59,
        61, 63, 67, 70, 74, 76, 78, 85, 90, 91, 103, 108,
        113, 117, 119, 127
    };

    @Before
    public void setUp() throws Exception
    {
        // Set up Realm to be mocked. The order of these matters.
        mockStatic(RealmCore.class);
        mockStatic(RealmLog.class);
        mockStatic(Realm.class);
        mockStatic(RealmConfiguration.class);
        this.mockContext = RuntimeEnvironment.application;
        Whitebox.setInternalState(
                Realm.class,
                "applicationContext",
                RuntimeEnvironment.application);
        /*
            A better solution would be to just mock the RealmConfiguration.Builder class,
            but it seems PowerMock has some problems mocking it (static inner class).
            We just mock RealmCore.loadLibrary(Context), which will be called by
            RealmConfiguration.Builder's constructor.
        */
        doNothing().when(RealmCore.class);
        RealmCore.loadLibrary(any(Context.class));
    }

    @Test(expected = DecisionTreeException.class)
    public void persistSurvey_DecisionTreeRealmNotEnabled_ThrowsException() throws Exception
    {
        DecisionTree decisionTree = createSimpleDecisionTree();
        Survey survey = decisionTree.getSurveyFromResource(R.raw.survey);
        decisionTree.persistSurvey(survey, null, null);
    }

    @Test(expected = DecisionTreeException.class)
    public void persistSurvey_NullAsFirstParam_ThrowsException() throws Exception
    {
        DecisionTree decisionTree = createRealmDecisionTree();
        decisionTree.persistSurvey(null, null, null);
    }

    @Test
    public void persistSurvey_SurveyAsFirstParam_ThrowsException() throws Exception
    {
        final Realm mockRealm = mock(Realm.class);
        when(Realm.getInstance(any(RealmConfiguration.class))).thenReturn(mockRealm);
        org.mockito.stubbing.Answer<Void> executeAnswer = new org.mockito.stubbing.Answer<Void>()
        {
            @Override
            public Void answer(InvocationOnMock invocation) throws Throwable
            {
                ((Realm.Transaction) invocation.getArguments()[0]).execute(mockRealm);
                return null;
            }
        };
        doAnswer(executeAnswer)
                .when(mockRealm)
                .executeTransactionAsync(
                        any(Realm.Transaction.class),
                        any(Realm.Transaction.OnSuccess.class),
                        any(Realm.Transaction.OnError.class));
        DecisionTree decisionTree = createRealmDecisionTree();
        Survey survey = decisionTree.getSurveyFromResource(R.raw.survey);
        decisionTree.persistSurvey(survey, null, null);
        verify(mockRealm).executeTransactionAsync(
                any(Realm.Transaction.class),
                any(Realm.Transaction.OnSuccess.class),
                any(Realm.Transaction.OnError.class));
        verify(mockRealm).copyToRealmOrUpdate(any(Survey.class));
    }

    private DecisionTree createRealmDecisionTree()
    {
        return new DecisionTree.Builder()
                .setContext(mockContext)
                .setRealmKey(fakeRealmKey)
                .setRealmEnabled(true)
                .build();
    }

    private DecisionTree createSimpleDecisionTree()
    {
        return new DecisionTree.Builder()
                .setContext(RuntimeEnvironment.application)
                .build();
    }
}
I think the problem is the following line:
@Rule
public PowerMockRule rule = new PowerMockRule();
However, if I remove this line I get the following error, although the @PrepareForTest line is unchanged:
org.powermock.api.mockito.ClassNotPreparedException:
The class io.realm.internal.RealmCore not prepared for test.
To prepare this class, add class to the '#PrepareForTest' annotation.
In case if you don't use this annotation, add the annotation on class or method level.
I guess you are using JaCoCo to collect coverage. PowerMock has a known issue with JaCoCo/EclEmma; we're going to fix it in the next release.
Update: PowerMock 1.6.6 has been released. It includes a fix for JaCoCo offline-instrumented code, so you can now get code coverage using the JaCoCo Maven plugin with offline instrumentation.