DataSource attribute for unit test method - unit-testing

I am trying to use a CSV data source in a device unit test (WinCE / Pocket PC 2003 emulator).
I added the data source with the wizard, in the Data Connection String property:
using Microsoft.VisualStudio.TestTools.UnitTesting;
....
[TestMethod()]
[DeploymentItem("Options.txt")]
[DeploymentItem("Options_1.txt")]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "C:\\...\\Tests\\Data\\LoadSettingsTest.csv", "LoadSettingsTest#csv", DataAccessMethod.Sequential)]
public void LoadSettingsTest()
{
...
}
I get the following compiler errors:
Error 1 The type or namespace name 'DataSource' could not be found
(are you missing a using directive or an assembly reference?)
Error 2 The type or namespace name 'DataSourceAttribute' could not be
found (are you missing a using directive or an assembly reference?)
Where is DataSource defined? Is the DataSource attribute supported in device unit tests?

DataSource is not supported by the Device Unit Testing Framework; see Unit Testing Framework (Devices).

It looks like you are trying to use a Comma-Separated Values (CSV) file as a DataSource.
You should first convert your CSV data into something a data control is familiar with.
Here is an example:
private const string FIELD_SEP = "\t";
private const string LINE_BREAK = "\r\n";
private const int ONE_KB = 1024;
// Requires using System.IO and using System.Text.
private System.Data.DataTable GenerateData(string csvDataFile) {
    var sb = new System.Text.StringBuilder();
    using (var file = File.Open(csvDataFile, FileMode.Open, FileAccess.Read)) {
        byte[] buffer = new byte[ONE_KB];
        int len = file.Read(buffer, 0, ONE_KB);
        // Stream.Read returns 0 at end of file, not -1
        while (0 < len) {
            sb.Append(Encoding.UTF8.GetString(buffer, 0, len));
            len = file.Read(buffer, 0, ONE_KB);
        }
    }
    var table = new System.Data.DataTable();
    var col1 = table.Columns.Add("ID", typeof(int));
    var col2 = table.Columns.Add("Name", typeof(string));
    var col3 = table.Columns.Add("Date", typeof(DateTime));
    var col4 = table.Columns.Add("Cost", typeof(decimal));
    // RemoveEmptyEntries drops the empty strings a "\r\n" pair would otherwise produce
    var lines = sb.ToString().Split(LINE_BREAK.ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
    foreach (var line in lines) {
        System.Data.DataRow row = table.NewRow();
        var fields = line.Split(FIELD_SEP.ToCharArray());
        row[col1] = int.Parse(fields[0]);
        row[col2] = fields[1];
        row[col3] = DateTime.Parse(fields[2]);
        row[col4] = decimal.Parse(fields[3]);
        table.Rows.Add(row);
    }
    return table;
}
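Since [DataSource] is unavailable, one way to consume this table is to drive the rows yourself inside a plain [TestMethod]. This is only a sketch: Settings.Load is a hypothetical stand-in for whatever code your test actually exercises.
[TestMethod()]
[DeploymentItem("LoadSettingsTest.csv")]
public void LoadSettingsTest()
{
    // Build the table from the deployed CSV, then iterate the rows manually
    System.Data.DataTable table = GenerateData("LoadSettingsTest.csv");
    foreach (System.Data.DataRow row in table.Rows)
    {
        // Settings.Load is a placeholder for the real code under test
        var settings = Settings.Load((int)row["ID"], (string)row["Name"]);
        Assert.IsNotNull(settings, "Failed for ID " + row["ID"]);
    }
}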
Of course, I have no idea what kind of data you are trying to extract from this CSV file, and there is no error checking: if fields[3] were an empty string, decimal.Parse(fields[3]) would throw an exception.
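If you want the loader to be more forgiving, here is a minimal sketch of a defensive row builder, assuming blank or malformed fields should become DBNull rather than throw:
private static void AddRowSafe(System.Data.DataTable table, string[] fields)
{
    if (fields.Length < 4) return; // skip short or garbled lines
    System.Data.DataRow row = table.NewRow();
    int id;
    row["ID"] = int.TryParse(fields[0], out id) ? (object)id : DBNull.Value;
    row["Name"] = fields[1];
    DateTime date;
    row["Date"] = DateTime.TryParse(fields[2], out date) ? (object)date : DBNull.Value;
    decimal cost;
    row["Cost"] = decimal.TryParse(fields[3], out cost) ? (object)cost : DBNull.Value;
    table.Rows.Add(row);
}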

Related

WebMethod returning multiple rows (List<>) from a SQL query in WCF with Entity Framework LINQ

Creating a WCF service in Visual Studio 2017, I need a WebMethod that returns multiple rows from a SQL query. Here is the code that is not working:
[WebMethod]
public List<TABLE_NAME> GetAllLineInfoDetailes(string OrderNumberSM)
{
string @OrdNumLG = OrderNumberSM;
List<TABLE_NAME> OrdNo = new List<TABLE_NAME>();
using (CONNECTION_NAME pubs = new CONNECTION_NAME())
{
var OrdNo_LINES = (from p in pubs.TABLE_NAME select p.OrderNumber == @OrdNumLG);
foreach (TABLE_NAME OrderLine in OrdNo_LINES)
{
TABLE_NAME a = new TABLE_NAME();
a.ItemNumber = OrderLine.ItemNumber;
a.LineNumber = OrderLine.LineNumber;
a.OrderNumber = OrderLine.OrderNumber;
OrdNo.Add(a);
}
}
return OrdNo;
}
The foreach is giving the error "Cannot convert type 'bool' to 'CONNECTION_NAME.TABLE_NAME'".
Any help with this, or a better way to return the full result set, would be appreciated.
As the error says, it's a type conversion problem: select p.OrderNumber == @OrdNumLG projects each row to a bool, so OrdNo_LINES is a sequence of bools, not TABLE_NAME entities.
Move the comparison into a where clause and select the entity itself:
var OrdNo_LINES = (from p in pubs.TABLE_NAME where p.OrderNumber == @OrdNumLG select p);
Found what will work...
[WebMethod]
public List<TABLE_NAME> GetAllLineInfoDetailes(string OrderNumberSM)
{
string @OrdNumSM = OrderNumberSM;
using (CONNECTION_NAME pubs = new CONNECTION_NAME())
{
var query = (from c in pubs.TABLE_NAME
where c.OrderNumber == @OrdNumSM
orderby c.LineNumber ascending
select c).ToList();
return query;
}
}
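For what it's worth, here is the same working query in LINQ method syntax, a sketch reusing the question's placeholder CONNECTION_NAME and TABLE_NAME names:
[WebMethod]
public List<TABLE_NAME> GetAllLineInfoDetailes(string OrderNumberSM)
{
    using (CONNECTION_NAME pubs = new CONNECTION_NAME())
    {
        // Where filters on the order number; OrderBy sorts by line number
        return pubs.TABLE_NAME
                   .Where(c => c.OrderNumber == OrderNumberSM)
                   .OrderBy(c => c.LineNumber)
                   .ToList();
    }
}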

Multiple unrelated dataViewMappings for custom visuals Power BI Report Server

There is a well-known problem with creating custom visuals for Power BI Report Server: you can't create multiple unrelated dataViewMappings, which means every field you put inside Fields must be related.
There has been a long-standing request from users to add this feature.
However, it looks like Microsoft hasn't implemented it yet.
https://github.com/Microsoft/PowerBI-visuals/issues/251
I found a workaround: you can combine all your different tables into one and tag each row with a tableName key.
For example, you have a table A:
value1|value2|tableName
1|2|tableA
3|4|tableA
and a table B:
value3|tableName
a|tableB
b|tableB
You create a tableC by using the M function Table.Combine({tableA, tableB}) or some other technique. As a result, it looks like this:
value1|value2|value3|tableName
1|2|null|tableA
3|4|null|tableA
null|null|a|tableB
null|null|b|tableB
Then, inside your custom visual, implement the following function, which parses the input:
public parseOptionsToTables(options: VisualUpdateOptions) {
var i: number;
var j: number;
var cntRows: number;
var cntCols: number;
var nameTable: string;
var tempDict: {} = {};
var tempVal: any;
var colName: string;
var cats: any = options.dataViews[0].table;
this.tables = {};
cntCols = cats.columns.length;
cntRows = cats.rows.length;
for (i = 0; i < cntRows; i++) {
tempVal = null;
colName = null;
nameTable = null;
tempDict = {};
for (j = 0; j < cntCols; j++) {
//for every row, check the tableName column and add the remaining values to the dict
tempVal = cats.rows[i][j];
colName = cats.columns[j].displayName;
if (colName == 'tableName') {
nameTable = tempVal;
} else {
//add if not null
if (tempVal !== null) {
tempDict[colName] = tempVal;
}
}
}
//append the row to its table; deepcopy is a deep-clone helper assumed to be defined elsewhere on the visual
this.tables[nameTable] = this.tables[nameTable] || [];
this.tables[nameTable].push(this.deepcopy(tempDict));
}
}
As a result, it creates a this.tables object with your tableA and tableB inside.
Although it helps to solve the main problem, I can't shake the feeling that I'm building crutches.
So, are there any other approaches that solve this problem more efficiently?

how to set hash in Postman Pre-Request Script for Marvel API

I have a pre-request script that I gathered from another post on StackOverflow, but I'm still getting invalid credentials.
Attempted to do this just with str_1 but it's not working. Not sure what request.data is supposed to do as it keeps returning NaN. I think that the problem might be there, but still at a loss. I've attempted converting all variables to a string, but that still returned the same error.
URL = https://gateway.marvel.com/v1/public/characters?ts={{timeStamp}}&apikey={{apiKey}}&hash={{hash}}
// Access your env variables like this
var ts = new Date();
ts = ts.getUTCMilliseconds();
var str_1 = ts + environment.apiKey + environment.privateKey;
// Or get your request parameters
var str_2 = request.data["timeStamp"] + request.data["apiKey"];
console.log('str_2 = ' + str_2);
// Use the CryptoJS
var hash = CryptoJS.MD5(str_1).toString();
// Set the new environment variable
pm.environment.set('timeStamp', ts);
pm.environment.set('hash', hash);
{
"code": "InvalidCredentials",
"message": "That hash, timestamp and key combination is invalid."
}
If someone can comment on why this is the solution, I would appreciate it. Here is what the issue was: the order of the values concatenated into the hash actually matters, so I had to flip pubkey + pvtkey to pvtkey + pubkey. Why is this?
INCORRECT
var message = ts+pubkey+pvtkey;
var a = CryptoJS.MD5(message);
pm.environment.set("hash", a.toString());
CORRECT
var message = ts+pvtkey+pubkey;
var a = CryptoJS.MD5(message);
pm.environment.set("hash", a.toString());
In Android Studio, I created a new Java class named MD5Hash, following the steps of https://javarevisited.blogspot.com/2013/03/generate-md5-hash-in-java-string-byte-array-example-tutorial.html
I just simplified the code to use only the Java utility MessageDigest:
import java.io.UnsupportedEncodingException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Calendar;

public class MD5Hash {
public static void main(String args[]) {
String publickey = "abcdef"; //your api key
String privatekey = "123456"; //your private key
Calendar calendar=Calendar.getInstance();
String stringToHash = calendar
.getTimeInMillis()+ privatekey + publickey;
System.out.println("hash : " + md5Java(stringToHash));
System.out.println("ts : "+ calendar.getTimeInMillis());
}
public static String md5Java(String message){
String digest = null;
try {
MessageDigest md = MessageDigest.getInstance("MD5");
byte[] hash = md.digest(message.getBytes("UTF-8"));
//converting byte array to Hexadecimal String
StringBuilder sb = new StringBuilder(2*hash.length);
for(byte b : hash){
sb.append(String.format("%02x", b&0xff));
}
digest = sb.toString();
} catch (UnsupportedEncodingException ex) {
// unreachable in practice: UTF-8 is always supported
} catch (NoSuchAlgorithmException ex) {
// unreachable in practice: every JVM ships MD5
}
return digest;
}
}
As you can see, if you copy-paste this code, it shows a green arrow on the left side of the class declaration; clicking it runs MD5Hash.main(), which prints the values for the time (ts) and the hash in your Run window.
Then verify directly on the internet:
https://gateway.marvel.com/v1/public/characters?limit=20&ts=1574945782067&apikey=abcdef&hash=4bbb5dtf899th5132hjj66

Running BeamSql WithoutCoder or Making Coder Dynamic

I am reading data from a file and converting it to BeamRecord, but when I run a query on it, it shows this error:
Exception in thread "main" java.lang.ClassCastException: org.apache.beam.sdk.coders.SerializableCoder cannot be cast to org.apache.beam.sdk.coders.BeamRecordCoder
at org.apache.beam.sdk.extensions.sql.BeamSql$QueryTransform.registerTables(BeamSql.java:173)
at org.apache.beam.sdk.extensions.sql.BeamSql$QueryTransform.expand(BeamSql.java:153)
at org.apache.beam.sdk.extensions.sql.BeamSql$QueryTransform.expand(BeamSql.java:116)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:533)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:465)
at org.apache.beam.sdk.values.PCollectionTuple.apply(PCollectionTuple.java:160)
at TestingClass.main(TestingClass.java:75)
But when I provide a coder, it runs perfectly.
I am a little confused: because I am using templates, the file's schema changes on every run, so is there any way I can run the query with a default coder, or with no coder at all?
For reference, the code is below.
PCollection<String> ReadFile1 = PBegin.in(p).apply(TextIO.read().from("gs://Bucket_Name/FileName.csv"));
PCollection<BeamRecord> File1_BeamRecord = ReadFile1.apply(new StringToBeamRecord()).setCoder(new Temp().test().getRecordCoder());
PCollection<String> ReadFile2= p.apply(TextIO.read().from("gs://Bucket_Name/FileName.csv"));
PCollection<BeamRecord> File2_Beam_Record = ReadFile2.apply(new StringToBeamRecord()).setCoder(new Temp().test1().getRecordCoder());
new Temp().test1().getRecordCoder() --> returns hard-coded BeamRecordCoder values, which I instead need to fetch at runtime.
The conversion from PCollection<String> to PCollection<BeamRecord> is below:
public class StringToBeamRecord extends PTransform<PCollection<String>,PCollection<BeamRecord>> {
private static final Logger LOG = LoggerFactory.getLogger(StringToBeamRecord.class);
@Override
public PCollection<BeamRecord> expand(PCollection<String> arg0) {
return arg0.apply("Conversion",ParDo.of(new ConversionOfData()));
}
static class ConversionOfData extends DoFn<String,BeamRecord> implements Serializable{
@ProcessElement
public void processElement(ProcessContext c){
String Data = c.element().replaceAll(",,",",blank,");
String[] array = Data.split(",");
List<String> fieldNames = new ArrayList<>();
List<Integer> fieldTypes = new ArrayList<>();
List<Object> Data_Conversion = new ArrayList<>();
int Count = 0;
for(int i = 0; i < array.length; i++){
fieldNames.add("R" + Count);
Count++;
fieldTypes.add(Types.VARCHAR); //with a known schema this could be set per column
Data_Conversion.add(array[i]);
}
LOG.info("The Size is : "+Data_Conversion.size());
BeamRecordSqlType type = BeamRecordSqlType.create(fieldNames, fieldTypes);
c.output(new BeamRecord(type,Data_Conversion));
}
}
}
The query is:
PCollectionTuple test = PCollectionTuple.of(
new TupleTag<BeamRecord>("File1_BeamRecord"),File1_BeamRecord)
.and(new TupleTag<BeamRecord>("File2_BeamRecord"), File2_BeamRecord);
PCollection<BeamRecord> output = test.apply(BeamSql.queryMulti(
"Select * From File1_BeamRecord JOIN File2_BeamRecord "));
Is there any way I can make the coder dynamic, or run the query with a default coder?

subsonic 3.0 active record update

I am able to retrieve and insert database values, but I can't figure out what the Update() syntax should be with a where clause.
Environment -> ASP.Net, C#
Settings.ttinclude
const string Namespace = "subsonic_db.Data";
const string ConnectionStringName = "subsonic_dbConnectionString";
//This is the name of your database and is used in naming
//the repository. By default we set it to the connection string name
const string DatabaseName = "subsonic_db";
Retrieve example
var product = equipment.SingleOrDefault(x => x.id == 1);
Insert Example
equipment my_equipment = new equipment();
try
{
// insert
my_equipment.parent_id = 0;
my_equipment.primary_id = 0;
my_equipment.product_code = product_code.Text;
my_equipment.product_description = product_description.Text;
my_equipment.product_type_id = Convert.ToInt32(product_type_id.SelectedItem.Value);
my_equipment.created_date = DateTime.Now;
my_equipment.serial_number = serial_number.Text;
my_equipment.Save();
}
catch (Exception err)
{
lblError.Text = err.Message;
}
Edit: I think I was just too tired last night; it is pretty easy to update. Just retrieve the record as above and call Update() on it.
var equip = Equipment.SingleOrDefault(x => x.id == 1);
lblGeneral.Text = equip.product_description;
equip.product_description = "Test";
equip.Update();
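For completeness, a slightly more defensive sketch of the same flow, guarding against a record that does not exist; it reuses the lblError label from the insert example above:
try
{
    var equip = Equipment.SingleOrDefault(x => x.id == 1);
    if (equip != null)
    {
        equip.product_description = "Test";
        equip.Update(); // issues UPDATE ... WHERE id = 1
    }
}
catch (Exception err)
{
    lblError.Text = err.Message;
}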
Resolved. View answer above.