How to configure OpenJPA logging in WSO2AS - wso2

I am trying to see the SQL generated by OpenJPA on WSO2AS 5.3.0. I tried:
- updating system.properties in /tomee
- adding openjpa.Log to persistence.xml, using a RESOURCE_LOCAL transaction but also with JTA, with a File attribute, or with log4j
- adding log4j.properties to /log4j.properties
No matter what I try, I see no output from OpenJPA!
Any ideas?

You can configure OpenJPA logging through the logging-bridge.properties file in the WSO2AS_Home/repository/conf/etc folder.
The default levels are as follows:
OpenEJB.level = WARNING
OpenEJB.options.level = WARNING
OpenEJB.server.level = WARNING
OpenEJB.startup.level = WARNING
OpenEJB.startup.service.level = WARNING
OpenEJB.startup.config.level = WARNING
OpenEJB.hsql.level = INFO
OpenEJB.rs.level = INFO
OpenEJB.ws.level = INFO
OpenEJB.tomcat.level = INFO
CORBA-Adapter.level = WARNING
Transaction.level = WARNING
org.apache.activemq.level = SEVERE
org.apache.geronimo.level = SEVERE
openjpa.level = WARNING
OpenEJB.cdi.level = WARNING
org.apache.webbeans.level = WARNING
org.apache.openejb.level = WARNING
You can refer to the documentation for more information:
https://docs.wso2.com/display/AS530/Configure+Logging+using+Config+Files

It turns out that the levels in logging-bridge.properties use the JDK (java.util.logging) level names: ALL / FINEST / FINER / FINE / CONFIG / INFO, and so on.
After changing the levels to openjpa.jdbc.SQL.level = ALL and org.wso2.carbon.bootstrap.logging.handlers.LoggingConsoleHandler.level = ALL, it works.
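For reference, this is a minimal sketch of the additions to WSO2AS_Home/repository/conf/etc/logging-bridge.properties (the handler name is the default console handler mentioned above; adjust it if your setup uses a different handler):
# log the SQL statements issued by OpenJPA
openjpa.jdbc.SQL.level = ALL
# let the console handler pass the fine-grained records through
org.wso2.carbon.bootstrap.logging.handlers.LoggingConsoleHandler.level = ALL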

Related

No FileSystem for scheme "s3" when trying to read a list of files with Spark from EC2

I'm trying to provide a list of files for spark to read as and when it needs them (which is why I'd rather not use boto or whatever else to pre-download all the files onto the instance and only then read them into spark "locally").
os.environ['PYSPARK_SUBMIT_ARGS'] = "--master local[3] pyspark-shell"
spark = SparkSession.builder.getOrCreate()
spark.sparkContext._jsc.hadoopConfiguration().set('fs.s3.access.key', credentials['AccessKeyId'])
spark.sparkContext._jsc.hadoopConfiguration().set('fs.s3.secret.key', credentials['SecretAccessKey'])
spark.read.json(['s3://url/3521.gz', 's3://url/2734.gz'])
No idea what local[3] is about but without this --master flag, I was getting another exception:
Exception: Java gateway process exited before sending the driver its port number.
Now, I'm getting this:
Py4JJavaError: An error occurred while calling o37.json.
: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "s3"
...
Not sure what o37.json refers to here but it probably doesn't matter.
I saw a bunch of answers to similar questions suggesting an addition of flags like:
os.environ['PYSPARK_SUBMIT_ARGS'] = "--packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2 pyspark-shell"
I tried prepending it and appending it to the other flag but it doesn't work.
Nor do the many variations I see in other answers and elsewhere on the internet (with different packages and versions), for example:
os.environ['PYSPARK_SUBMIT_ARGS'] = '--master local[*] --jars spark-snowflake_2.12-2.8.4-spark_3.0.jar,postgresql-42.2.19.jar,mysql-connector-java-8.0.23.jar,hadoop-aws-3.2.2,aws-java-sdk-bundle-1.11.563.jar'
A typical example for reading files from S3 is as below -
Additionally, you can go through this answer to ensure the minimal structure and necessary modules are in place -
java.io.IOException: No FileSystem for scheme: s3
Read Parquet - S3
import os
import configparser
from pyspark import SparkContext
from pyspark.sql import SQLContext

# pull in the S3A filesystem and matching AWS SDK jars before the JVM starts
os.environ['PYSPARK_SUBMIT_ARGS'] = "--packages=com.amazonaws:aws-java-sdk-bundle:1.11.375,org.apache.hadoop:hadoop-aws:3.2.0 pyspark-shell"
sc = SparkContext.getOrCreate()
sql = SQLContext(sc)
hadoop_conf = sc._jsc.hadoopConfiguration()

# read temporary credentials from the relevant profile in ~/.aws/credentials
config = configparser.ConfigParser()
config.read(os.path.expanduser("~/.aws/credentials"))
access_key = config.get("****", "aws_access_key_id")
secret_key = config.get("****", "aws_secret_access_key")
session_key = config.get("****", "aws_session_token")

# use the s3a credentials provider that understands session tokens
hadoop_conf.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")
hadoop_conf.set("fs.s3a.access.key", access_key)
hadoop_conf.set("fs.s3a.secret.key", secret_key)
hadoop_conf.set("fs.s3a.session.token", session_key)

s3_path = "s3a://xxxx/yyyy/zzzz/"
sparkDF = sql.read.parquet(s3_path)
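With the S3A filesystem configured as above, the original read from the question should then work once the URLs use the s3a:// scheme (the bucket name below is just the placeholder from the question):
# assuming the SQLContext `sql` from the snippet above
df = sql.read.json(['s3a://url/3521.gz', 's3a://url/2734.gz'])
df.printSchema()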

How to set the syntax in pwndbg's context to AT&T?

By default it seems to use Intel syntax.
I tried with set disassembly-flavor at, but this only seems to affect the disassembly produced by the disassemble command, and not the one shown in the context window of pwndbg.
I apologize if this is not worthy of SO, but I really have tried looking online and nothing seems to show up.
Thank you,
I did some reverse engineering and found the problem and the solution. Basically, the syntax flavor is hard-coded. I'm working on a pull request to the pwndbg project with my solution. GitHub: pwndbg/pwndbg ---> Pull request #863.
If you want to change it yourself, you need root permissions to change a file in the /usr/share/pwndbg/ directory: add the line cs.syntax = 2 in the disasm module.
Main path: /usr/share/pwndbg/
Relative path: ./pwndbg/disasm/__init__.py
File Path: /usr/share/pwndbg/pwndbg/disasm/__init__.py
@pwndbg.memoize.reset_on_objfile
def get_disassembler_cached(arch, ptrsize, endian, extra=None):
    # ...
    cs = Cs(arch, mode)
    cs.syntax = 2  # add this line (2 == CS_OPT_SYNTAX_ATT)
    cs.detail = True
    return cs
The package capstone has these definitions:
# Capstone syntax value
CS_OPT_SYNTAX_DEFAULT = 0 # Default assembly syntax of all platforms (CS_OPT_SYNTAX)
CS_OPT_SYNTAX_INTEL = 1 # Intel X86 asm syntax - default syntax on X86 (CS_OPT_SYNTAX, CS_ARCH_X86)
CS_OPT_SYNTAX_ATT = 2 # ATT asm syntax (CS_OPT_SYNTAX, CS_ARCH_X86)
CS_OPT_SYNTAX_NOREGNAME = 3 # Asm syntax prints register name with only number - (CS_OPT_SYNTAX, CS_ARCH_PPC, CS_ARCH_ARM)
CS_OPT_SYNTAX_MASM = 4 # MASM syntax (CS_OPT_SYNTAX, CS_ARCH_X86)
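To see the effect of that value in isolation, here is a minimal standalone capstone sketch (this is not pwndbg code; the byte string is just a sample x86-64 instruction sequence) that switches the output to AT&T syntax:
from capstone import Cs, CS_ARCH_X86, CS_MODE_64, CS_OPT_SYNTAX_ATT

md = Cs(CS_ARCH_X86, CS_MODE_64)
md.syntax = CS_OPT_SYNTAX_ATT  # same effect as cs.syntax = 2 in the patch above
code = b"\x55\x48\x8b\x05\xb8\x13\x00\x00"  # push rbp; mov rax, qword ptr [rip + 0x13b8]
for insn in md.disasm(code, 0x1000):
    print("0x%x:\t%s\t%s" % (insn.address, insn.mnemonic, insn.op_str))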
add "set disassembly-flavor att" to ~/.gdbinit

Logrotation for flink not working with logrotate.d

I would like to enable log rotation for Flink but didn't see an option to reload the process.
I tried to enable log rotation using logrotate.d with the copytruncate option, but it leads to the creation of sparse files.
Is there any option to enable log rotation for the Flink taskmanager without restarting the process?
Have you tried something like this inside log4j.properties in the flink/conf installation directory?
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${log.file}
log4j.appender.file.MaxFileSize=1000MB
log4j.appender.file.MaxBackupIndex=0
log4j.appender.file.append=false
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
It has been a couple years since the original answer. I needed to do the same thing on EMR for Flink 1.13. I figure EMR is a common enough platform that I'd share what worked for me.
I believe the main difference between my answer and the earlier one is that Flink 1.13 uses Log4j 2. I suspect the earlier example is using Log4j version 1 because there's this line, for example:
log4j.appender.file.MaxBackupIndex=0
but in my code below, the same would be
appender.main.strategy.max = 0 (I actually use 1000 in the code, but wanted to show an apples-to-apples comparison)
That makes sense, because in Migration from log4j to log4j2: the equivalent attribute to MaxFileSize and MaxBackupIndex, it says:
In Log4j 2 those values are associated with the triggering policy or RolloverStrategy
Anyhow, the configuration file on the EMR master node is /etc/flink/conf.dist/log4j.properties.
Here is the complete file contents (minus the boilerplate license at the top).
Each place I made a change to the content of the file to support log rolling has a rolling_file_change comment:
# This affects logging for both user code and Flink
rootLogger.level = INFO
rootLogger.appenderRef.file.ref = MainAppender
# Uncomment this if you want to _only_ change Flink's logging
#logger.flink.name = org.apache.flink
#logger.flink.level = INFO
# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
logger.akka.name = akka
logger.akka.level = INFO
logger.kafka.name = org.apache.kafka
logger.kafka.level = INFO
logger.hadoop.name = org.apache.hadoop
logger.hadoop.level = INFO
logger.zookeeper.name = org.apache.zookeeper
logger.zookeeper.level = INFO
# Log all infos in the given file
appender.main.name = MainAppender
# rolling_file_change: original line
# appender.main.type = File
# rolling_file_change: replacement line
appender.main.type = RollingFile
appender.main.append = false
appender.main.fileName = ${sys:log.file}
# rolling_file_change: net new line
appender.main.filePattern = ${sys:log.file}.%i
appender.main.layout.type = PatternLayout
appender.main.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
# rolling_file_change: new block start
appender.main.policies.type = Policies
appender.main.policies.size.type = SizeBasedTriggeringPolicy
appender.main.policies.size.size = 10MB
appender.main.strategy.type = DefaultRolloverStrategy
appender.main.strategy.max = 1000
# rolling_file_change: new block end
# Suppress the irrelevant (wrong) warnings from the Netty channel handler
logger.netty.name = org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline
logger.netty.level = OFF

P4Python Adding new file - [Error]: Null directory (//) not allowed in //workspaceXXXX//xxxx.txt

I am trying to add a new file to Perforce. I am getting the error below:
[Error]: "Null directory (//) not allowed in '//workspaceXXXX//xxxx.txt'."
What I am doing:
1) I create a workspace
self.p4.client = "workspaceXXXX"
client = self.p4.fetch_client("workspaceXXXX")
client._root = "../temp/workspaceXXXX"
depotPath = "//TEAM/PATH_1/PATH_2/2018-Aug-06/..."
wsPath = "//workspaceXXXX/..."
client._view = [depotPath+" "+wsPath]
"""Create a workspace"""
self.p4.save_client(client)
2) Place the new file in the workspace
3) Add file
self.p4.run("add", "//TEAM/PATH_1/PATH_2/2018-Aug-06/xxxx.txt")
While adding the file I encounter the above error. What am I doing wrong here?

Datapath#ports is kept for compatibility

I am trying to get ryu to run, especially the topology discovery.
Now I am running the demo application for that under ryu/topology/dumper.py, which is supposed to dump all topology events. I am in the ryu/topology directory and run it using ryu-manager dumper.py. The version of ryu-manager is 2.23.2.
Shortly after starting it gives me this error:
/usr/local/lib/python2.7/dist-packages/ryu/topology/switches.py:478: UserWarning:
Datapath#ports is kept for compatibility with the previous openflow versions (< 1.3).
This not be updated by EventOFPPortStatus message. If you want to be updated,
you can use 'ryu.controller.dpset' or 'ryu.topology.switches'.
for port in dp.ports.values():
What's really weird to me is that it recommends to use ryu.topology.switches, but that error is triggered by line 478 of that very file!
The function in question is this:
class Switches(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION, ofproto_v1_2.OFP_VERSION,
                    ofproto_v1_3.OFP_VERSION, ofproto_v1_4.OFP_VERSION]
    _EVENTS = [event.EventSwitchEnter, event.EventSwitchLeave,
               event.EventPortAdd, event.EventPortDelete,
               event.EventPortModify,
               event.EventLinkAdd, event.EventLinkDelete]

    DEFAULT_TTL = 120  # unused. ignored.
    LLDP_PACKET_LEN = len(LLDPPacket.lldp_packet(0, 0, DONTCARE_STR, 0))
    LLDP_SEND_GUARD = .05
    LLDP_SEND_PERIOD_PER_PORT = .9
    TIMEOUT_CHECK_PERIOD = 5.
    LINK_TIMEOUT = TIMEOUT_CHECK_PERIOD * 2
    LINK_LLDP_DROP = 5

    # ...

    def _register(self, dp):
        assert dp.id is not None
        self.dps[dp.id] = dp
        if dp.id not in self.port_state:
            self.port_state[dp.id] = PortState()
            for port in dp.ports.values():  # THIS LINE
                self.port_state[dp.id].add(port.port_no, port)
Has anyone else encountered this problem before? How can I fix it?
I ran into the same issue (depending on your application, maybe it's not a problem, just a warning that you can ignore). Here is what I figured out after a find . -type f | xargs grep "ports is kept"
This warning is triggered in ryu.topology.switches, by a call to _get_ports() in class Datapath of file ryu/controller/controller.py.
class Datapath(ofproto_protocol.ProtocolDesc):
    # ......
    def _get_ports(self):
        if (self.ofproto_parser is not None and
                self.ofproto_parser.ofproto.OFP_VERSION >= 0x04):
            message = (
                'Datapath#ports is kept for compatibility with the previous '
                'openflow versions (< 1.3). '
                'This not be updated by EventOFPPortStatus message. '
                'If you want to be updated, you can use '
                '\'ryu.controller.dpset\' or \'ryu.topology.switches\'.'
            )
            warnings.warn(message, stacklevel=2)
        return self._ports

    def _set_ports(self, ports):
        self._ports = ports

    # To show warning when Datapath#ports is read
    ports = property(_get_ports, _set_ports)
My understanding is that if the warning is from ryu.topology.switches or ryu.controller.dpset, you can ignore it, because those two classes handle the event for you. But if you use Datapath directly, port status is not updated automatically. Anyone correct me if I'm wrong.
class Switches(app_manager.RyuApp):
    # ......
    @set_ev_cls(ofp_event.EventOFPPortStatus, MAIN_DISPATCHER)
    def port_status_handler(self, ev):
        # updates PortState and raises EventPortAdd/EventPortDelete/EventPortModify
        ...
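If you do use Datapath directly and want to track port changes yourself, a handler along these lines should work; this is only a sketch (OpenFlow 1.3 assumed, and the PortWatcher class name is just for illustration):
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls

class PortWatcher(app_manager.RyuApp):
    @set_ev_cls(ofp_event.EventOFPPortStatus, MAIN_DISPATCHER)
    def port_status_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp = dp.ofproto
        reasons = {ofp.OFPPR_ADD: 'ADD',
                   ofp.OFPPR_DELETE: 'DELETE',
                   ofp.OFPPR_MODIFY: 'MODIFY'}
        # msg.desc is the OFPPort structure describing the affected port
        self.logger.info('dpid=%s port=%s %s', dp.id, msg.desc.port_no,
                         reasons.get(msg.reason, 'UNKNOWN'))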
I have encountered that problem before, but I just ignored it and so far everything has been working as expected.
If you are trying to learn the topology, I would recommend using ryu.topology.api, i.e.
from ryu.topology.api import get_switch, get_link
There is this tutorial. However, some of the stuff is missing.
Here is what I have so far: Controller.py
In Controller.py, the two functions get_switch(self, None) and get_link(self, None) give you the lists of switches and links.
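For completeness, here is a minimal sketch of that approach (class and variable names are my own; run it with ryu-manager --observe-links so the topology events are generated):
from ryu.base import app_manager
from ryu.controller.handler import set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.topology import event
from ryu.topology.api import get_switch, get_link

class TopologyDiscovery(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(event.EventSwitchEnter)
    def get_topology_data(self, ev):
        # passing None returns the whole topology instead of a single dpid
        switches = [sw.dp.id for sw in get_switch(self, None)]
        links = [(l.src.dpid, l.src.port_no, l.dst.dpid, l.dst.port_no)
                 for l in get_link(self, None)]
        self.logger.info('switches: %s', switches)
        self.logger.info('links: %s', links)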