Create a JPA Named Query with conditional where clause based on user value selections - jpa-2.0

I am rewriting an existing application and am required to reuse its existing queries using JPA 2. How do I write a named query that replaces a dynamic query, one that adds sections to the where clause based on the user's selections? In other words, if a value is selected from a drop-down box, I add an AND/OR section to the where clause; if not, that section is omitted. For example:
int paramIdx = 0;
String query = "SELECT * FROM ANIMAL_TABLE WHERE ...something";
String whereClause = "";
if (canine != null && canine.trim().length() > 0) {
    whereClause += " AND canine_type = :" + (paramIdx + 1);
    viewObject.setWhereClauseParam(paramIdx, canine);
    paramIdx++;
}
if (cat != null && cat.trim().length() > 0) {
    whereClause += " AND cat_type = :" + (paramIdx + 1);
    viewObject.setWhereClauseParam(paramIdx, cat);
}
I'm wondering if there is a way to accomplish this using JPA. Any ideas or suggestions would be helpful.
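A named query is static, so its text cannot be changed at runtime. One common workaround is to make every predicate optional inside the query itself and bind null for any value the user did not select; for truly dynamic where clauses, the JPA 2 Criteria API is the usual alternative. Here is a minimal sketch of the optional-predicate pattern, assuming a hypothetical Animal entity with canineType and catType fields:
import java.util.List;
import javax.persistence.*;

@Entity
@NamedQuery(
    name = "Animal.findByOptionalTypes",
    query = "SELECT a FROM Animal a"
          + " WHERE (:canine IS NULL OR a.canineType = :canine)"
          + "   AND (:cat IS NULL OR a.catType = :cat)")
class Animal {
    @Id
    private Long id;
    private String canineType;
    private String catType;
}

class AnimalDao {
    private final EntityManager em;

    AnimalDao(EntityManager em) {
        this.em = em;
    }

    // Bind null for any value that was not selected; that predicate then
    // collapses to TRUE and is effectively omitted from the where clause.
    List<Animal> findAnimals(String canine, String cat) {
        return em.createNamedQuery("Animal.findByOptionalTypes", Animal.class)
                 .setParameter("canine", isBlank(canine) ? null : canine.trim())
                 .setParameter("cat", isBlank(cat) ? null : cat.trim())
                 .getResultList();
    }

    private static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }
}
Be aware that some providers and databases are picky about the :param IS NULL comparison for certain parameter types, so test it against your own provider; if the conditions grow more complex, building the where clause programmatically with CriteriaBuilder is the cleaner route.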

Related

Does Qt have reflection for C++?

I want to create an SQL table in Qt C++, so I have written the code below.
It is going to create a table for me, where the first argument, tableName, is the name of the table I want to create. The next argument is quite tricky:
columns specifies each column name and its data type. I think this is a bad approach. Example:
QVector<QPair<QString, QString>> myColumns = {{"ID", "BIGINT"}, {"logging_id", "INT"}};
If I have, for example, 50 columns, myColumns is going to be very long.
My question is whether Qt C++ has some kind of reflection, so I can:
Get the name of every field
Get the data type of every field
If the field is an array, know how many elements are inside that array
I was planning to create an entity class and use it to get the information needed to create each column in the database.
void Database::createTable(const QString &tableName, const QVector<QPair<QString, QString>> &columns) {
    QSqlQuery query;
    for (int i = 0; i < columns.size(); i++) {
        /* Get the QPair values */
        QString columnName = columns.at(i).first;
        QString dataType = columns.at(i).second;
        /* If the column is ID, try to create a new table */
        if (columnName == "ID") {
            query.exec("CREATE TABLE " + tableName + "(" + columnName + " " + dataType + " NOT NULL AUTO_INCREMENT PRIMARY KEY)");
            continue;
        }
        /* If not, try to append a new column */
        query.exec("ALTER TABLE " + tableName + " ADD " + columnName + " " + dataType);
    }
}

ASP.NET MVC Core 2 with Entity Framework saving primary key Id to a column that isn't a relationship column

Please look at this case:
if (product.ProductId == 0)
{
    product.CreateDate = DateTime.UtcNow;
    product.EditDate = DateTime.UtcNow;
    context.Products.Add(product);
    if (!string.IsNullOrEmpty(product.ProductValueProduct)) { SaveProductValueProduct(product); }
}
context.SaveChanges();
When I debug the code, ProductId is negative, but after the save ProductId is correct in the database (I know this is normal). But when I want to use it in SaveProductValueProduct(product), called right after context.Products.Add(product), ProductId behaves strangely. (I'm creating a List inside SaveProductValueProduct.)
List<ProductValueProductHelper> pvpListToSave = new List<ProductValueProductHelper>();
foreach (var item in dProductValueToSave)
{
    ProductValueProductHelper pvp = new ProductValueProductHelper();
    pvp.ProductId = (item.Value.HasValue ? item.Value : product.ProductId).GetValueOrDefault();
    pvp.ValueId = Convert.ToInt32(item.Key.Remove(item.Key.IndexOf(",")));
    pvp.ParentProductId = (item.Value.HasValue ? product.ProductId : item.Value);
    pvpListToSave.Add(pvp);
}
I want to use ProductId for a relationship. Depending on the logic, I have to save ProductId either to the ProductId column OR to the ParentProductId column.
ProductId 45 saves correctly to the ProductId column (3rd row: this is a relationship), BUT not to ParentProductId (2nd row: this is just a nullable int column for app logic), even though while debugging I pass the same negative value from product.ProductId to both.
QUESTION:
How can I pass the correct ProductId to a column that isn't a relationship column?
I just call SaveProductValueProduct(product) after SaveChanges (i.e. I call SaveChanges twice):
if (product.ProductId == 0)
{
    product.CreateDate = DateTime.UtcNow;
    product.EditDate = DateTime.UtcNow;
    context.Products.Add(product);
}
context.SaveChanges();
if (product.ProductId != 0)
{
    if (!string.IsNullOrEmpty(product.ProductValueProduct)) { SaveProductValueProduct(product); }
    context.SaveChanges();
}
I set this as the answer because it fixes the problem, but I am very curious whether there is another way to solve it. I don't like calling context.SaveChanges() twice in a row.

How to do multiple parallel readers for data export using Google Spanner?

External Backups/Snapshots for Google Cloud Spanner recommends using queries with timestamp bounds to create snapshots for export. At the bottom of the Timestamp Bounds documentation it states:
Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as version GC. By default, version GC reclaims versions after they are one hour old. Because of this, Cloud Spanner cannot perform reads at a read timestamp more than one hour in the past.
So any export would need to complete within an hour. A single reader (i.e. select * from table; using timestamp X) would not be able to export the entire table within an hour.
How can multiple parallel readers be implemented in spanner?
Note: It is mentioned in one of the comments that support for Apache Beam is coming, but it looks like that uses a single reader:
/** A simplest read function implementation. Parallelism support is coming. */
https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/spanner/NaiveSpannerReadFn.java#L26
Is there a way to do the parallel reads that Beam requires today using existing APIs? Or will Beam need to use something that isn't released yet on Google Spanner?
It is possible to read data in parallel from Cloud Spanner with the BatchClient class. See read_data_in_parallel for more information.
If you are looking to export data from Cloud Spanner, I'd recommend using Cloud Dataflow (see the integration details here), as it provides higher-level abstractions and takes care of data processing details like scaling and failure handling.
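For illustration, here is a minimal sketch of a partitioned read with the BatchClient in the Java client library; the project, instance, database, and table names are placeholders:
import com.google.cloud.spanner.*;
import java.util.List;

public class ParallelRead {
    public static void main(String[] args) {
        Spanner spanner = SpannerOptions.newBuilder().build().getService();
        try {
            BatchClient batchClient = spanner.getBatchClient(
                    DatabaseId.of("my-project", "my-instance", "my-database"));
            // One batch read-only transaction gives all partitions the same
            // consistent read timestamp.
            BatchReadOnlyTransaction txn =
                    batchClient.batchReadOnlyTransaction(TimestampBound.strong());
            List<Partition> partitions = txn.partitionQuery(
                    PartitionOptions.newBuilder().build(),
                    Statement.of("SELECT * FROM my_table"));
            // Each partition can be handed to a separate thread (or machine).
            for (Partition partition : partitions) {
                try (ResultSet rs = txn.execute(partition)) {
                    while (rs.next()) {
                        // process the current row
                    }
                }
            }
        } finally {
            spanner.close();
        }
    }
}
A BatchReadOnlyTransaction can also be re-created on another process from its BatchTransactionId, which is what makes truly distributed exports possible.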
Edit 2018-03-30 - The example project has been updated to use the BatchClient offered by Google Cloud Spanner
After the release of the BatchClient for reading/downloading large amounts of data, the example project below has been updated to use the new batch client instead of the standard database client. The basic idea behind the project is still the same: copy data to/from Cloud Spanner and any other database using standard JDBC functionality. The following code snippet sets the JDBC connection to batch read mode:
if (source.isWrapperFor(ICloudSpannerConnection.class))
{
    ICloudSpannerConnection con = source.unwrap(ICloudSpannerConnection.class);
    // Make sure no transaction is running
    if (!con.isBatchReadOnly())
    {
        if (con.getAutoCommit())
        {
            con.setAutoCommit(false);
        }
        else
        {
            con.commit();
        }
        con.setBatchReadOnly(true);
    }
}
When the connection is in 'batch read only' mode, it uses the BatchClient of Google Cloud Spanner instead of the standard database client. When one of the Statement#execute(String) or PreparedStatement#execute() methods is called (as these allow multiple result sets to be returned), the JDBC driver creates a partitioned query instead of a normal query. The results of this partitioned query are a number of result sets (one per partition) that can be fetched with the Statement#getResultSet() and Statement#getMoreResults(int) methods.
Statement statement = source.createStatement();
boolean hasResults = statement.execute(select);
int workerNumber = 0;
while (hasResults)
{
    ResultSet rs = statement.getResultSet();
    PartitionWorker worker = new PartitionWorker("PartitionWorker-" + workerNumber, config, rs, tableSpec, table, insertCols);
    workers.add(worker);
    hasResults = statement.getMoreResults(Statement.KEEP_CURRENT_RESULT);
    workerNumber++;
}
The result sets returned by Statement#execute(String) are not executed directly; execution starts only on the first call to ResultSet#next(). Handing these result sets to separate worker threads ensures that the data is downloaded and copied in parallel.
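The PartitionWorker class itself is not shown here; conceptually it is just a thread that drains the ResultSet it was given and writes the rows to the destination. A hypothetical, stripped-down version (the real one also handles the config, tableSpec, and insertCols arguments seen above):
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical, stripped-down worker; batching, type mapping, and progress
// logging from the real PartitionWorker are omitted.
class PartitionWorker extends Thread {
    private final ResultSet rs;

    PartitionWorker(String name, ResultSet rs) {
        super(name);
        this.rs = rs;
    }

    @Override
    public void run() {
        try {
            // The partitioned query only executes on the first call to next().
            while (rs.next()) {
                // copy the current row to the destination database
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}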
Original answer:
This project was initially created for conversion in the other direction (from a local database to Cloud Spanner), but as it uses JDBC for both source and destination it can also be used the other way around: converting a Cloud Spanner database to a local PostgreSQL database. Large tables are converted in parallel using a thread pool.
The project uses this open source JDBC driver instead of the JDBC driver supplied by Google. The source Cloud Spanner JDBC connection is set to read-only mode with autocommit=false. This ensures that the connection automatically creates a read-only transaction using the current time as the timestamp the first time you execute a query. All subsequent queries within the same (read-only) transaction will use the same timestamp, giving you a consistent snapshot of your Google Cloud Spanner database.
It works as follows:
1. Set the source database to read-only transactional mode.
2. The convert(String catalog, String schema) method iterates over all tables in the source database (Cloud Spanner).
3. For each table the number of records is determined and, depending on the size of the table, the table is copied using either the main thread of the application or a worker pool.
4. The class UploadWorker is responsible for the parallel copying. Each worker is assigned a range of records from the table (for example rows 1 to 2,400). The range is selected by a select statement in this format: 'SELECT * FROM $TABLE ORDER BY $PK_COLUMNS LIMIT $BATCH_SIZE OFFSET $CURRENT_OFFSET'.
5. Commit the read-only transaction on the source database after ALL tables have been converted.
Below is a code snippet of the most important parts.
public void convert(String catalog, String schema) throws SQLException
{
    int batchSize = config.getBatchSize();
    destination.setAutoCommit(false);
    // Set the source connection to transaction mode (no autocommit) and read-only
    source.setAutoCommit(false);
    source.setReadOnly(true);
    try (ResultSet tables = destination.getMetaData().getTables(catalog, schema, null, new String[] { "TABLE" }))
    {
        while (tables.next())
        {
            String tableSchema = tables.getString("TABLE_SCHEM");
            if (!config.getDestinationDatabaseType().isSystemSchema(tableSchema))
            {
                String table = tables.getString("TABLE_NAME");
                // Check whether the destination table is empty.
                int destinationRecordCount = getDestinationRecordCount(table);
                if (destinationRecordCount == 0 || config.getDataConvertMode() == ConvertMode.DropAndRecreate)
                {
                    if (destinationRecordCount > 0)
                    {
                        deleteAll(table);
                    }
                    int sourceRecordCount = getSourceRecordCount(getTableSpec(catalog, tableSchema, table));
                    if (sourceRecordCount > batchSize)
                    {
                        convertTableWithWorkers(catalog, tableSchema, table);
                    }
                    else
                    {
                        convertTable(catalog, tableSchema, table);
                    }
                }
                else
                {
                    if (config.getDataConvertMode() == ConvertMode.ThrowExceptionIfExists)
                        throw new IllegalStateException("Table " + table + " is not empty");
                    else if (config.getDataConvertMode() == ConvertMode.SkipExisting)
                        log.info("Skipping data copy for table " + table);
                }
            }
        }
    }
    source.commit();
}

private void convertTableWithWorkers(String catalog, String schema, String table) throws SQLException
{
    String tableSpec = getTableSpec(catalog, schema, table);
    Columns insertCols = getColumns(catalog, schema, table, false);
    Columns selectCols = getColumns(catalog, schema, table, true);
    if (insertCols.primaryKeyCols.isEmpty())
    {
        log.warning("Table " + tableSpec + " does not have a primary key. No data will be copied.");
        return;
    }
    log.info("About to copy data from table " + tableSpec);
    int batchSize = config.getBatchSize();
    int totalRecordCount = getSourceRecordCount(tableSpec);
    int numberOfWorkers = calculateNumberOfWorkers(totalRecordCount);
    int numberOfRecordsPerWorker = totalRecordCount / numberOfWorkers;
    if (totalRecordCount % numberOfWorkers > 0)
        numberOfRecordsPerWorker++;
    int currentOffset = 0;
    ExecutorService service = Executors.newFixedThreadPool(numberOfWorkers);
    for (int workerNumber = 0; workerNumber < numberOfWorkers; workerNumber++)
    {
        int workerRecordCount = Math.min(numberOfRecordsPerWorker, totalRecordCount - currentOffset);
        UploadWorker worker = new UploadWorker("UploadWorker-" + workerNumber, selectFormat, tableSpec, table,
                insertCols, selectCols, currentOffset, workerRecordCount, batchSize, source,
                config.getUrlDestination(), config.isUseJdbcBatching());
        service.submit(worker);
        currentOffset = currentOffset + numberOfRecordsPerWorker;
    }
    service.shutdown();
    try
    {
        service.awaitTermination(config.getUploadWorkerMaxWaitInMinutes(), TimeUnit.MINUTES);
    }
    catch (InterruptedException e)
    {
        log.severe("Error while waiting for workers to finish: " + e.getMessage());
        throw new RuntimeException(e);
    }
}
public class UploadWorker implements Runnable
{
    private static final Logger log = Logger.getLogger(UploadWorker.class.getName());

    private final String name;
    private String selectFormat;
    private String sourceTable;
    private String destinationTable;
    private Columns insertCols;
    private Columns selectCols;
    private int beginOffset;
    private int numberOfRecordsToCopy;
    private int batchSize;
    private Connection source;
    private String urlDestination;
    private boolean useJdbcBatching;

    UploadWorker(String name, String selectFormat, String sourceTable, String destinationTable, Columns insertCols,
            Columns selectCols, int beginOffset, int numberOfRecordsToCopy, int batchSize, Connection source,
            String urlDestination, boolean useJdbcBatching)
    {
        this.name = name;
        this.selectFormat = selectFormat;
        this.sourceTable = sourceTable;
        this.destinationTable = destinationTable;
        this.insertCols = insertCols;
        this.selectCols = selectCols;
        this.beginOffset = beginOffset;
        this.numberOfRecordsToCopy = numberOfRecordsToCopy;
        this.batchSize = batchSize;
        this.source = source;
        this.urlDestination = urlDestination;
        this.useJdbcBatching = useJdbcBatching;
    }

    @Override
    public void run()
    {
        // Connection source = DriverManager.getConnection(urlSource);
        try (Connection destination = DriverManager.getConnection(urlDestination))
        {
            log.info(name + ": " + sourceTable + ": Starting copying " + numberOfRecordsToCopy + " records");
            destination.setAutoCommit(false);
            String sql = "INSERT INTO " + destinationTable + " (" + insertCols.getColumnNames() + ") VALUES \n";
            sql = sql + "(" + insertCols.getColumnParameters() + ")";
            PreparedStatement statement = destination.prepareStatement(sql);
            int lastRecord = beginOffset + numberOfRecordsToCopy;
            int recordCount = 0;
            int currentOffset = beginOffset;
            while (true)
            {
                int limit = Math.min(batchSize, lastRecord - currentOffset);
                String select = selectFormat.replace("$COLUMNS", selectCols.getColumnNames());
                select = select.replace("$TABLE", sourceTable);
                select = select.replace("$PRIMARY_KEY", selectCols.getPrimaryKeyColumns());
                select = select.replace("$BATCH_SIZE", String.valueOf(limit));
                select = select.replace("$OFFSET", String.valueOf(currentOffset));
                try (ResultSet rs = source.createStatement().executeQuery(select))
                {
                    while (rs.next())
                    {
                        int index = 1;
                        for (Integer type : insertCols.columnTypes)
                        {
                            Object object = rs.getObject(index);
                            statement.setObject(index, object, type);
                            index++;
                        }
                        if (useJdbcBatching)
                            statement.addBatch();
                        else
                            statement.executeUpdate();
                        recordCount++;
                    }
                    if (useJdbcBatching)
                        statement.executeBatch();
                }
                destination.commit();
                log.info(name + ": " + sourceTable + ": Records copied so far: " + recordCount + " of " + numberOfRecordsToCopy);
                currentOffset = currentOffset + batchSize;
                if (recordCount >= numberOfRecordsToCopy)
                    break;
            }
        }
        catch (SQLException e)
        {
            log.severe("Error during data copy: " + e.getMessage());
            throw new RuntimeException(e);
        }
        log.info(name + ": Finished copying");
    }
}

Faceted search on multilist in Sitecore 7

I'm able to generate the facets and bind the results to LinkButtons within a Repeater control. Now I'm facing a problem generating the facet query with an OR operator when the user selects more than one value of the same facet type in Sitecore 7.
What can be done to solve it?
Thanks
Here's a blog post about solving this using PredicateBuilder with a specific code example:
http://www.nttdatasitecore.com/en/Blog/2013/November/Building-Facet-Queries-with-PredicateBuilder.aspx
To implement faceted search in Sitecore, use the PredicateBuilder class to build the filter query and then add the filter query to the base query. The code is below:
List<PeopleFields> objPeoplefields = new List<PeopleFields>();
IQueryable<PeopleFields> Query = null;
var predicatePractice = Sitecore.ContentSearch.Utilities.PredicateBuilder.False<PeopleFields>();
var predicateOffice = Sitecore.ContentSearch.Utilities.PredicateBuilder.False<PeopleFields>();
using (var context = ContentSearchManager.GetIndex(SITECORE_WEB_INDEX).CreateSearchContext())
{
    // Base query
    Query = context.GetQueryable<PeopleFields>().Where(i => i.FirstName.StartsWith(txtFirstName.Text)).Where(i => i.LastName.StartsWith(txtLastName.Text));
    foreach (string strselecteFacet in lstPracticefacetSelected)
    {
        // Filter query
        predicatePractice = predicatePractice.Or(x => x.Practice.Like(strselecteFacet));
    }
    foreach (string strselecteFacet in lstOfficefacetSelected)
    {
        // Filter query
        predicateOffice = predicateOffice.Or(x => x.Office.Like(strselecteFacet));
    }
    // Join the filter queries with the base query
    if (lstPracticefacetSelected.Count > 0 && lstOfficefacetSelected.Count > 0)
        Query = Query.Filter(predicatePractice).Filter(predicateOffice);
    else if (lstPracticefacetSelected.Count > 0 && lstOfficefacetSelected.Count == 0)
        Query = Query.Filter(predicatePractice);
    else if (lstPracticefacetSelected.Count == 0 && lstOfficefacetSelected.Count > 0)
        Query = Query.Filter(predicateOffice);
    objPeoplefields = Query.ToList();
}
As a quick way, if you're ok with having SitecoreUISearchResultItem as the query result type, you may be able to utilize the same method Sitecore 7 uses to parse queries entered in the content editor search:
Sitecore.Buckets.Util.UIFilterHelpers.ParseDatasourceString(string query)
If that's not quite what you're after, reading how it's implemented with a decompiler (ILSpy, DotPeek, Reflector, Resharper, etc.) may help you compose an expression manually based on dynamic criteria.

Opening Rich Text Editor in custom field of Sitecore Content Editor

I'm implementing a custom field in Sitecore for the Content Editor, and I need to be able to open the Rich Text editor and get the data from there. I'm not really sure where to look though, nor how to go about it.
Had to decompile the Sitecore.Kernel DLL in order to figure this out.
The first thing is to spin off a call from the Context.ClientPage object.
So, for my situation:
switch (message.Name)
{
    case "richtext:edit":
        Sitecore.Context.ClientPage.Start(this, "EditText");
        break;
}
You will then need a method in your class with the same name as the one passed to the Start call above. In it, you either open the rich text editor (if the request isn't a postback) or handle the posted data:
protected void EditText(ClientPipelineArgs args)
{
    Assert.ArgumentNotNull(args, "args");
    if (args.IsPostBack)
    {
        if (args.Result == null || args.Result == "undefined")
            return;
        var text = args.Result;
        if (text == "__#!$No value$!#__")
            text = string.Empty;
        Value = text;
        UpdateHtml(args); // Function that executes JavaScript to update the embedded rich text frame
    }
    else
    {
        var richTextEditorUrl = new RichTextEditorUrl
        {
            Conversion = RichTextEditorUrl.HtmlConversion.DoNotConvert,
            Disabled = Disabled,
            FieldID = FieldID,
            ID = ID,
            ItemID = ItemID,
            Language = ItemLanguage,
            Mode = string.Empty,
            Source = Source,
            Url = "/sitecore/shell/Controls/Rich Text Editor/EditorPage.aspx",
            Value = Value,
            Version = ItemVersion
        };
        UrlString url = richTextEditorUrl.GetUrl();
        handle = richTextEditorUrl.Handle;
        ID md5Hash = MainUtil.GetMD5Hash(Source + ItemLanguage);
        SheerResponse.Eval("scContent.editRichText(\"" + url + "\", \"" + md5Hash.ToShortID() + "\", " +
            StringUtil.EscapeJavascriptString(GetDeviceValue(CurrentDevice)) + ")");
        args.WaitForPostBack();
    }
}