WSO2 Siddhi standalone library - No extension exist for :timeBatch - wso2

I'm trying to use the WSO2 Siddhi library as a standalone solution in an OSGi environment. It works so far, except when I use windows: when I define a stream or a window for aggregations, I get the error message "No extension exist for :timeBatch". But timeBatch ships with Siddhi, so this message doesn't make sense. The libraries siddhi-core, siddhi-annotations, siddhi-query-api and siddhi-query-compiler are all listed as required plug-ins, and queries without windows build without problems.
Is there anything else I might have forgotten?
String espEventStream =
"define Stream EspStream (temperature float, humidity float, brightness float); " +
"define window EspStreamWindow (temperature float, humidity float, brightness float) timeBatch(5sec); " +
" " +
"#info(name = 'query0') " +
"from EspStream " +
"insert into EspStreamWindow; "+
"#info(name = 'query1') " +
"from EspStreamWindow "+
"select avg(brightness) as avgBrightness, min(brightness) as minBrightness, max(brightness) as maxBrightness " +
"insert into EspStreamOut ;";
SiddhiManager siddhiManager = new SiddhiManager();
SiddhiAppRuntime siddhiAppRuntime = siddhiManager.createSiddhiAppRuntime(espEventStream);
Edit: It works if I pack all dependencies into one single bundle and use that, instead of using every dependency as a separate bundle.
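A possible alternative that avoids the single mega-bundle (an untested sketch: Siddhi resolves extensions by scanning for annotated classes, which can break across OSGi class loaders) is to register the window processor on the SiddhiManager explicitly before creating the runtime. The package names below assume Siddhi 4.x; adjust them for your version.
import org.wso2.siddhi.core.SiddhiAppRuntime;
import org.wso2.siddhi.core.SiddhiManager;
import org.wso2.siddhi.core.query.processor.stream.window.TimeBatchWindowProcessor;

SiddhiManager siddhiManager = new SiddhiManager();
// Register timeBatch under the name the parser failed to resolve,
// bypassing annotation-based classpath scanning.
siddhiManager.setExtension("timeBatch", TimeBatchWindowProcessor.class);
SiddhiAppRuntime siddhiAppRuntime = siddhiManager.createSiddhiAppRuntime(espEventStream);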

Related

Running arbitrary SQL commands MySQL C++ (X DevAPI)?

I've connected my C++ project to MySQL and successfully created a session. I was able to create a schema. My issue is that when I try to run simple arbitrary queries like USE testSchema SHOW tables; using the MySQL Connector/C++ API, I run into SQL syntax errors. When I run the same query directly in the MySQL shell, it runs perfectly fine.
Here is the full code
const char* url = (argc > 1 ? argv[1] : "mysqlx://pct@127.0.0.1");
cout << "Creating session on " << url << " ..." << endl;
Session sess(url);
{
cout << "Connected!" << endl;
// Create the Schema "testSchema"; This code creates a schema without issue
cout << "Creating Schema..." << endl;
sess.dropSchema("testSchema");
Schema mySchema = sess.createSchema("testSchema");
cout << "Schema Created!" << endl;
// Create the Table "testTable"; This code runs like normal, but the schema doesn't show
cout << "Creating Table with..." << endl;
SqlStatement sqlcomm = sess.sql("USE testSchema SHOW tables;");
sqlcomm.execute();
}
Here is the console output:
Creating session on mysqlx://pct@127.0.0.1 ...
Connected!
Creating Schema...
Schema Created!
Creating Table with...
MYSQL ERROR: CDK Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SHOW tables' at line 1
The error "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SHOW tables' at line 1" is a MySQL error that means I have a syntax error in the query, but when I take a closer look at the query, I see nothing wrong with it.
I've copied and pasted the query directly from the cpp file into the mysql shell and it runs perfectly. This tells me that something is up with how I'm passing the query to the sql() function. But the documentation for the sql() function is really terse.
Here is the reference to the sql() function: https://dev.mysql.com/doc/dev/connector-cpp/8.0/class_session.html#a2e625b5223acd2a3cbc5c02d653a1426
Can someone please give me some insight on where I'm going wrong? Also, here is the full cpp code for more context: https://pastebin.com/3kQY8THC
Windows 10
Visual Studio 2019
MySQL 8.0 with Connect/C++ X DevAPI
sql() executes a single SQL statement, so USE testSchema SHOW tables; is handed to the server as one malformed statement. You can do it in two steps:
sess.sql("USE testSchema").execute();
SqlStatement sqlcomm = sess.sql("SHOW tables");
SqlResult res = sqlcomm.execute();
for(auto row : res)
{
std::cout << row.get(0).get<std::string>() << std::endl;
}
Also, you can use the Schema::getTables():
for(auto table : mySchema.getTables())
{
std::cout << table.getName() << std::endl;
}
Keep in mind that Schema::getTables() doesn't show the collections created by Schema::createCollection(). There is also Schema::getCollections():
for(auto collection : mySchema.getCollections())
{
std::cout << collection.getName() << std::endl;
}

Multiple cloud-sql table export as csv

Is there a way to export multiple SQL tables as CSV by issuing specific queries from Cloud SQL?
Below is the code I currently have. When I call exportTables for multiple tables back to back, I see a 409 error. It's probably because the Cloud SQL instance is busy with an export and doesn't allow the subsequent export request.
How can I get this to work? What would be the ideal solution here?
private void exportTables(String table_name, String query)
throws IOException, InterruptedException {
HttpClient httpclient = new HttpClient();
PostMethod httppost =
new PostMethod(
"https://www.googleapis.com/sql/v1beta4/projects/"
+ "abc"
+ "/instances/"
+ "zxy"
+ "/export");
String destination_bucket =
String.join(
"/",
"gs://" + "test",
table_name,
DateTimeUtil.getCurrentDate() + ".csv");
GoogleCredentials credentials =
GoogleCredentials.getApplicationDefault().createScoped(SQLAdminScopes.all());
AccessToken access_token = credentials.refreshAccessToken();
access_token.getTokenValue();
httppost.addRequestHeader("Content-Type", "application/json");
httppost.addRequestHeader("Authorization", "Bearer " + access_token.getTokenValue());
String request =
"{"
+ " \"exportContext\": {"
+ " \"fileType\": \"CSV\","
+ " \"uri\":\""
+ destination_bucket
+ "\","
+ " \"databases\": [\""
+ "xyz"
+ "\"],"
+ " \"csvExportOptions\": {"
+ " \"selectQuery\": \""
+ query
+ "\""
+ " }\n"
+ " }"
+ "}";
httppost.setRequestEntity(new StringRequestEntity(request, "application/json", "UTF-8"));
httpclient.executeMethod(httppost);
if (httppost.getStatusCode() > 200) {
String response = new String(httppost.getResponseBody(), StandardCharsets.UTF_8);
if (httppost.getStatusCode() != 409) {
throw new RuntimeException(
"Exception occurred while exporting the table: " + table_name + " Error " + response);
} else {
throw new IOException("SQL instance seems to be busy at the moment. Please retry");
}
}
httppost.releaseConnection();
logger.info("Finished exporting table {} to {}", table_name, destination_bucket);
}
I don't have a suggestion for fixing the issue on Cloud SQL directly, but here is a way to execute the exports in sequence thanks to a newer tool: Workflows.
Define the data format that you want, in JSON, describing ONE export. Then provide an array of these configurations to your workflow.
In this workflow:
- Loop over the configuration array.
- For each configuration, perform an API call to Cloud SQL to start the export.
- From the answer of the API call, get the jobId.
- Sleep a while.
- Check whether the export is over (with the jobId).
- If not, sleep and check again.
- If yes, loop (and thus start the next export).
It's serverless and the free tier makes this use case free.
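If you'd rather stay in Java instead of Workflows, the same loop can be sketched client-side: start one export, poll the operation returned by the export call until it is DONE, then start the next one. Here is a rough, untested sketch reusing the question's commons-httpclient style. startExport is hypothetical (like exportTables above, but returning the "name" field of the operation from the export response), and the crude string check stands in for real JSON parsing:
import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

private void exportSequentially(Map<String, String> queriesByTable, String accessToken)
    throws IOException, InterruptedException {
  HttpClient httpclient = new HttpClient();
  for (Map.Entry<String, String> entry : queriesByTable.entrySet()) {
    // Hypothetical helper: submits the export and returns the operation name.
    String operation = startExport(entry.getKey(), entry.getValue());
    while (true) {
      // Poll the Cloud SQL Admin API v1beta4 operations endpoint.
      GetMethod poll = new GetMethod(
          "https://www.googleapis.com/sql/v1beta4/projects/abc/operations/" + operation);
      poll.addRequestHeader("Authorization", "Bearer " + accessToken);
      httpclient.executeMethod(poll);
      String body = new String(poll.getResponseBody(), StandardCharsets.UTF_8);
      poll.releaseConnection();
      if (body.contains("\"DONE\"")) {
        break; // crude status check; parse the JSON in real code
      }
      Thread.sleep(5000); // sleep a while before checking again
    }
  }
}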

Get serviceInstanceName and serviceKeyName for Cloud Foundry user-provided services using cloudFoundryOperations

I'm trying to get the credentials of a user-provided service (UPS) in Cloud Foundry, using:
Mono<ServiceKey> serviceKey = (Mono<ServiceKey>) cloudFoundryOperations
.services()
.getServiceKey(
GetServiceKeyRequest.builder()
.serviceKeyName("digital_cassandra")
.serviceInstanceName("2a5aa377-e992-4f88-9f85-d9cec5c3bea9")
.build())
.subscribe();
serviceKey.map(serviceKey1 -> {
System.out.println(serviceKey1.getCredentials().toString());
return serviceKey1.getCredentials().get(0);
});
but nothing is printed.
How do I fetch the serviceKeyName and serviceInstanceName via cloudFoundryOperations?
I need to print all the serviceKeyNames and serviceInstanceNames in my space.
.serviceInstanceName("2a5aa377-e992-4f88-9f85-d9cec5c3bea9")
It should be the actual name, not the guid. Like "my-key" or whatever you called your key.
but nothing is printed. How do I fetch the serviceKeyName and serviceInstanceName via cloudFoundryOperations?
If you just want to print to the console, try something like this:
cloudFoundryOperations
.services()
.getServiceKey(GetServiceKeyRequest.builder()
.serviceInstanceName("reservation-db")
.serviceKeyName("cf-mysql")
.build())
.doOnNext(key -> {
System.out.println("Key:");
System.out.println(" " + key.getName() + " (" + key.getId() + ")");
key.getCredentials().forEach((k, v) -> {
System.out.println(" " + k + " => " + v);
});
})
.block();
The GetServiceKeyRequest determines which service key is looked up. The doOnNext call allows you to inspect but not consume the key, which works fine for printing it out. Then the example calls .block() to wait for the results, which is fine because this is just an example; you wouldn't want to do that in your actual code, though. There you'd probably want one of the subscribe() variants (you could swap subscribe() in for doOnNext() too; it just depends on what your code is doing).
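For instance, a non-blocking variant of the same lookup might look like this (a sketch, using the same request values as above):
cloudFoundryOperations
    .services()
    .getServiceKey(GetServiceKeyRequest.builder()
        .serviceInstanceName("reservation-db")
        .serviceKeyName("cf-mysql")
        .build())
    // subscribe with a consumer instead of blocking the calling thread
    .subscribe(key -> key.getCredentials()
        .forEach((k, v) -> System.out.println(" " + k + " => " + v)));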
I need to print all the serviceKeyNames and serviceInstanceNames in my space.
To get all the keys for all the service instances:
cloudFoundryOperations
.services()
.listInstances()
.doOnNext(si -> {
System.out.println(" " + si.getName() + " (" + si.getId() + ")");
})
.flatMap((ServiceInstanceSummary si) -> {
return cloudFoundryOperations
.services()
.listServiceKeys(ListServiceKeysRequest.builder()
.serviceInstanceName(si.getName())
.build())
.doOnNext(key -> {
System.out.println("Key:");
System.out.println(" " + key.getName() + " (" + key.getId() + ")");
key.getCredentials().forEach((k, v) -> {
System.out.println(" " + k + " => " + v);
});
});
})
.blockLast();
This one enumerates all the service instances, printing the name/id, then uses flatMap to go out and get the service keys for each service instance, merging them all into one Flux<ServiceKey>. The doOnNext() is just for printing; you don't necessarily have to do that. You could consume the result in a number of ways, like collecting it into a list or subscribing to it; this just works nicely for an example. Use what works best for your code.

Set severity filter for sinks of a Boost.Log severity logger instance

I am using Boost.Log 1.55.0 in a project and want to change the severity filter for all sinks of a boost::log::sources::severity_logger instance.
Here's an example of how to set up one sink with an initial severity filter:
void InitializeLogging(LogLevels const kLogLevel) const {
auto line_id = boost::log::expressions::attr<unsigned int>("LineID");
auto severity = boost::log::expressions::attr<LogLevels>("Severity");
auto timestamp = boost::log::expressions::format_date_time<boost::posix_time::ptime>(
"TimeStamp",
"%Y-%m-%d %H:%M:%S");
boost::log::formatter const kFormatter{
boost::log::expressions::stream
<< std::setw(6) << std::setfill('0') << line_id
<< std::setfill(' ')
<< ": " << timestamp
<< " [" << severity << "] "
<< boost::log::expressions::smessage};
using TextSink = boost::log::sinks::synchronous_sink<boost::log::sinks::text_ostream_backend>;
boost::shared_ptr<TextSink> sink = boost::make_shared<TextSink>();
boost::shared_ptr<std::ostream> stream(&std::clog, boost::empty_deleter());
sink->locked_backend()->add_stream(stream);
sink->set_filter(severity >= kLogLevel);
sink->set_formatter(kFormatter);
boost::log::core::get()->add_sink(sink);
boost::log::add_common_attributes();
}
So I am able to set up the filtering when I create the sink via the member function set_filter, but I want to know how to modify the filter of one, several, or all sinks that are configured for the core of Boost.Log.
Is there any function I have not seen yet to modify existing sinks?
If not, do I have to remove the sinks from the core and "re-create" them?
I found the solution myself, so I will answer my two questions here:
Such a function does not exist, or I was unable to find it.
No, it is not necessary to re-create them: the filter for all sinks can be changed by calling set_filter on the core of Boost.Log. I am using the following function for my custom severity log levels:
void log_level(LogLevels const kLogLevel) {
boost::log::core::get()->set_filter(boost::log::expressions::attr<LogLevels>("Severity") >= kLogLevel);
}
Sidenote: Removing all sinks from the core is possible by calling boost::log::core::get()->remove_all_sinks(), but not necessary in my use case.
I found the following to be a simple way to change the filter:
Given the following setup performed during log initialization:
boost::shared_ptr<file_sink> mSink(new file_sink(keywords::file_name = "BoostLog_%Y%m%d_%3N.txt", keywords::rotation_size = 500 * 1024));
mSink->set_filter(severity >= error);
I logged lots of messages with the original filter, then I changed the filter to a new severity level with the following:
mSink->reset_filter();
mSink->set_filter(severity >= warning);
The logging then paid attention to the new filter setting.

Improving performance of WURFL code

I tested the following code, which prints the properties of the user agent. However, I notice that it takes a noticeable amount of time to execute.
// Initialize the WURFL library.
String projRoot = System.getProperty("user.dir");
wurflFile = projRoot + File.separator + "wurfl-2.3.3" + File.separator + "wurfl.xml";
File dataFile = new File(wurflFile);
wurfl = new CustomWURFLHolder(dataFile);
String deviceUrl = "Apple-iPhone5C1";
WURFLManager manager = wurfl.getWURFLManager();
Device device = manager.getDeviceForRequest(deviceUrl);
System.out.println("Device: " + device.getId());
System.out.println("Capability: " + device.getCapability("preferred_markup"));
System.out.println("Device UA: " + device.getUserAgent());
Map capabilities = device.getCapabilities();
System.out.println("Size of the map: " + capabilities.keySet().size());
for (Object capabilityName : capabilities.keySet()) {
System.out.println(capabilityName);
}
One reason is that it takes time to load and parse the WURFL XML database file (which is about 20 MB in size).
I want to know if there is a different WURFL API that would improve this performance. Eventually, I will be putting this code into an HTTP proxy, where I want to check device profile parameters to adapt the content.
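For now, the workaround I have in mind (a minimal sketch, assuming the same CustomWURFLHolder and WURFLManager API as above; WurflHolderCache is a name I made up) is to parse wurfl.xml once and reuse the holder for every request, so the 20 MB parse cost is paid only at startup:
import java.io.File;

public final class WurflHolderCache {
    private static volatile CustomWURFLHolder holder;

    // Lazily builds the holder once; subsequent calls reuse the parsed database.
    public static WURFLManager getManager(File dataFile) {
        if (holder == null) {
            synchronized (WurflHolderCache.class) {
                if (holder == null) {
                    holder = new CustomWURFLHolder(dataFile); // expensive: loads and parses the XML
                }
            }
        }
        return holder.getWURFLManager();
    }
}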
Thanks.