I'm currently trying to join two tables in AWS Glue from an RDS instance. After successfully crawling the database structure, I'm setting up my job with:
table1 = (
glueContext.create_dynamic_frame
.from_catalog(database="transact", table_name="transact_table1")
)
table1.printSchema()
print "Count: ", table1.count()
table2 = (
glueContext.create_dynamic_frame
.from_catalog(database="transact", table_name="transact_table2")
)
table2.printSchema()
print "Count: ", table2.count()
Strangely, the job fails with:
File "/mnt/yarn/usercache/root/appcache/application_1520463557704_0008/container_1520463557704_0008_01_000001/PyGlue.zip/awsglue/dynamicframe.py", line 275, in count
File "/mnt/yarn/usercache/root/appcache/application_1520463557704_0008/container_1520463557704_0008_01_000001/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/mnt/yarn/usercache/root/appcache/application_1520463557704_0008/container_1520463557704_0008_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/mnt/yarn/usercache/root/appcache/application_1520463557704_0008/container_1520463557704_0008_01_000001/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o64.count.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 4 times, most recent failure: Lost task 0.3 in stage 4.0 (TID 26, ip-10-1-2-4.ec2.internal, executor 1): java.lang.NullPointerException
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$13.apply(JdbcUtils.scala:427)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$13.apply(JdbcUtils.scala:425)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:286)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:268)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
The curious part is that it only happens on the second count(): I can see in the logs that it correctly prints out the table1 schema and count, and it also prints the table2 schema.
I added these counts because I was trying to Join.apply those dynamic frames together and that was failing with a NullPointerException too. What gives? Am I missing a bit of configuration, perhaps?
UPDATE 1:
It appears to be a problem with that second table in particular: picking some other table from the catalog to serve as table2 makes the job succeed.
So I'll rephrase the question: how must a table from the catalog differ from the others for it to raise this error?
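One way to narrow this down (a hedged debugging sketch I have not run, reusing the catalog names above and the standard DynamicFrame select_fields transform) is to count table2 one field at a time and see which column makes the JDBC reader blow up:
# Hypothetical debugging sketch: count table2 one field at a time to find
# the column whose values trigger the NullPointerException in the JDBC reader.
table2 = (
    glueContext.create_dynamic_frame
    .from_catalog(database="transact", table_name="transact_table2")
)
for column in table2.toDF().columns:
    single_field = table2.select_fields([column])
    print("Column %s count: %d" % (column, single_field.count()))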
print "Count: ", table2count()
Is that a typo made while posting the question to SO? If not, please check the code: shouldn't it be table2.count()?
The command is pretty vanilla:
az sql server ad-admin create --display-name 'some group' --object-id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' --resource-group my-group --server my-server
The command works when I run it in a terminal, and other az commands in the script run fine, but when the script hits this line (no matter where I place it), I get the following error message.
Any ideas?
ERROR: create_or_update() missing 1 required positional argument: 'parameters'
2020-04-09T22:11:13.3286506Z Traceback (most recent call last):
2020-04-09T22:11:13.3287125Z File "/opt/az/lib/python3.6/site-packages/knack/cli.py", line 206, in invoke
2020-04-09T22:11:13.3287519Z cmd_result = self.invocation.execute(args)
2020-04-09T22:11:13.3288177Z File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 608, in execute
…
2020-04-09T22:11:13.3294117Z return …
…T22:11:13.3294770Z File "/opt/az/lib/python3.6/site-packages/azure/cli/core/__init__.py", line 493, in default_command_handler
2020-04-09T22:11:13.3295184Z return op(**command_args)
2020-04-09T22:11:13.3295845Z File "/opt/az/lib/python3.6/site-packages/azure/cli/command_modules/sql/custom.py", line 2074, in server_ad_admin_set
2020-04-09T22:11:13.3296258Z properties=kwargs)
2020-04-09T22:11:13.3296834Z TypeError: create_or_update() missing 1 required positional argument: 'parameters'
Try these commands on separate lines. It might fix the error, or at least help you identify which option specifically is causing it.
az sql server ad-admin create --display-name 'some group'
az sql server ad-admin create --object-id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
az sql server ad-admin create --resource-group my-group
az sql server ad-admin create --server my-server
You can also try removing the single quotes and adding = between each option name and its value.
Here is one approach:
az sql server ad-admin create --display-name=some group --object-id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --resource-group=my-group --server=my-server
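As an aside (not part of the original answer): a value that contains a space, such as some group, still needs quoting even with the = form, so a variant that keeps the quotes would be:
az sql server ad-admin create --display-name='some group' --object-id='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' --resource-group=my-group --server=my-server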
I'm trying to use a multi-character delimiter in a table insert for a Hive job on EMR on Amazon AWS, as explained in this link. The delimiter for the file is "|".
https://cwiki.apache.org/confluence/display/Hive/MultiDelimitSerDe
However, I ended up having to use...
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe'
Instead of the documented...
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.MultiDelimitSerDe'
in order for it to not give me this error.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Cannot validate serde: org.apache.hadoop.hive.serde2.MultiDelimitSerDe
OK. So when I add the .contrib and no longer get that error, I get the error below instead, whose root cause is: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe not found
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1548264520414_0027_1_00, diagnostics=[Task failed, taskId=task_1548264520414_0027_1_00_000021, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1548264520414_0027_1_00_000021_0:java.lang.RuntimeException: java.lang.RuntimeException: Map operator initialization failed
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Map operator initialization failed
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:354)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:184)
... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe not found
at org.apache.hadoop.hive.ql.exec.MapOperator.getConvertedOI(MapOperator.java:328)
at org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:420)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:286)
... 15 more
So I've been reading that you have to add the .jar file.
https://community.hortonworks.com/questions/82189/hive-cannot-see-jar.html
And so I've tried all kinds of things to get this to work. The Hive shell says that it is adding the jar to the class path:
hive> add jar /usr/lib/hive/lib/hive-contrib-2.3.3-amzn-1.jar
> ;
Added [/usr/lib/hive/lib/hive-contrib-2.3.3-amzn-1.jar] to class path
Added resources: [/usr/lib/hive/lib/hive-contrib-2.3.3-amzn-1.jar]
hive> add jar /usr/lib/hive/lib/hive-contrib.jar
> ;
Added [/usr/lib/hive/lib/hive-contrib.jar] to class path
Added resources: [/usr/lib/hive/lib/hive-contrib.jar]
hive> exit;
So I'm not sure what to do. It's acting as if the hive-contrib .jar isn't on the class path despite my adding it. I've also tried running...
export HADOOP_USER_CLASSPATH_FIRST=true
which is found here...
How to include jars in Hive (Amazon Hadoop env)
And that doesn't fix it either.
How can I use a multi-character delimiter SerDe for a Hive job on AWS?
Thank you.
I could not get MultiDelimitSerDe to work. Instead, I was lucky in that the delimiter had quotation marks on either side of the pipe, so it looks like "|". The quotes turn the values between them into strings, so the additional pipes inside those column values don't act as delimiters.
"Test | Test2 "|" Test3 | Test 4 | Test 5 "|" Test 6 "
You can see an explanation in the link below. The part that talks about it is in the comments, not the article.
https://www.ericlin.me/2015/07/how-to-create-a-hive-multi-character-delimitered-table/
If I didn't have those quotation marks around the delimiter, I'm not sure how I would have been able to handle a multi-character delimiter, especially if I had quotation marks inside any of my fields; but after checking, out of the billions of rows there is not a single quote.
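For what it's worth, here is a minimal, untested sketch of how such quote-wrapped, pipe-separated data could be declared with the built-in OpenCSVSerde instead of MultiDelimitSerDe. The table name, column names, and S3 location are placeholders, so adjust them to your data:
-- Hedged sketch (placeholder names): OpenCSVSerde treats | as the separator
-- and " as the quote character, so pipes inside quoted values are not split.
-- Note that OpenCSVSerde reads every column as STRING.
CREATE EXTERNAL TABLE example_pipe_quoted (
  col1 STRING,
  col2 STRING,
  col3 STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  'separatorChar' = '|',
  'quoteChar' = '\"'
)
LOCATION 's3://your-bucket/your-data/';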
For small S3 input files (~10 GB), the Glue ETL job works fine, but for the larger dataset (~200 GB) the job fails.
Here is part of the ETL code:
# Converting Dynamic frame to dataframe
df = dropnullfields3.toDF()
# create new partition column
partitioned_dataframe = df.withColumn('part_date', df['timestamp_utc'].cast('date'))
# store the data in parquet format on s3
partitioned_dataframe.write.partitionBy(['part_date']).format("parquet").save(output_lg_partitioned_dir, mode="append")
The job ran for 4 hours and then threw this error:
File "script_2017-11-23-15-07-32.py", line 49, in
partitioned_dataframe.write.partitionBy(['part_date']).format("parquet").save(output_lg_partitioned_dir,
mode="append") File
"/mnt/yarn/usercache/root/appcache/application_1511449472652_0001/container_1511449472652_0001_02_000001/pyspark.zip/pyspark/sql/readwriter.py",
line 550, in save File
"/mnt/yarn/usercache/root/appcache/application_1511449472652_0001/container_1511449472652_0001_02_000001/py4j-0.10.4-src.zip/py4j/java_gateway.py",
line 1133, in call File
"/mnt/yarn/usercache/root/appcache/application_1511449472652_0001/container_1511449472652_0001_02_000001/pyspark.zip/pyspark/sql/utils.py",
line 63, in deco File
"/mnt/yarn/usercache/root/appcache/application_1511449472652_0001/container_1511449472652_0001_02_000001/py4j-0.10.4-src.zip/py4j/protocol.py",
line 319, in get_return_value py4j.protocol.Py4JJavaError: An error
occurred while calling o172.save. : org.apache.spark.SparkException:
Job aborted. at
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:147)
at
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
at
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
at
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at
org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:121)
at
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:101)
at
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at
org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
at
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at
org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at
org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at
org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
at
org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
at
org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:492)
at
org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
at
org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at
py4j.Gateway.invoke(Gateway.java:280) at
py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79) at
py4j.GatewayConnection.run(GatewayConnection.java:214) at
java.lang.Thread.run(Thread.java:748) Caused by:
org.apache.spark.SparkException: Job aborted due to stage failure:
Total size of serialized results of 3385 tasks (1024.1 MB) is bigger
than spark.driver.maxResultSize (1024.0 MB) at
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257) at
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
at
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) at
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918) at
org.apache.spark.SparkContext.runJob(SparkContext.scala:1931) at
org.apache.spark.SparkContext.runJob(SparkContext.scala:1951) at
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:127)
... 30 more
End of LogType:stdout
I would appreciate it if you could provide any guidance to resolve this issue.
You can only set configurable options like spark.driver.maxResultSize during context instantiation, and Glue provides you with a context (from memory, you can't instantiate a new one). I don't think you will be able to change the value of this property.
You'll normally get this error when you collect results to the driver that exceed the configured size. You aren't doing that explicitly in this case, so the error is confusing.
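For reference, a hypothetical one-liner (not something in the posted job) that would normally trigger it is an explicit driver-side collect:
rows = partitioned_dataframe.collect()  # pulls every row back onto the driver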
It seems like you are spawning 3385 tasks, which are presumably related to the dates in your input file (3385 dates, ~9 years?). You might try writing this file in batches, e.g.
from pyspark.sql.functions import year, col

partitioned_dataframe = df.withColumn('part_date', df['timestamp_utc'].cast('date'))
for batch_year in range(2000, 2018):
    # filter one year at a time so each write stays small
    yearly_dataframe = partitioned_dataframe.where(year(col('part_date')) == batch_year)
    yearly_dataframe.write.partitionBy(['part_date']) \
        .format("parquet") \
        .save(output_lg_partitioned_dir, mode="append")
I haven't checked this code end to end; note that it relies on pyspark.sql.functions.year, imported above.
When I've done data processing with Glue, I've simply found that batching the work was more effective than trying to get large datasets to complete in one go. The system is good but hard to debug, and stability on large data doesn't come easily.
Connecting wso2am-2.0.0 and wso2am-analytics-2.0.0 to a PostgreSQL (9.5) database (with a shared WSO2AM_STATS_DB database), we receive the following exception:
TID: [-1] [] ERROR {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} - Error in executing task: Error while saving data to the table API_DESTINATION_SUMMARY : Job aborted due to stage failure: Task 0 in stage 54296.0 failed 1 times, most recent failure: Lost task 0.0 in stage 54296.0 (TID 50425, localhost): java.sql.BatchUpdateException: Batch entry 0 INSERT INTO API_DESTINATION_SUMMARY (api, version, apiPublisher, context, destination, total_request_count, hostName, year, month, day, time) VALUES ('test01', 'v1.0.0', NULL, '/test/v1.0.0', 'http://demo6009762.mockable.io', 1, 'wso2apimgr3', 2017, 1, 26, '2017-01-26 15:59') ON CONFLICT (api,version,apiPublisher,context,destination,hostName,year,month,day) DO UPDATE SET total_request_count=EXCLUDED.total_request_count, time=EXCLUDED.time was aborted: ERROR: null value in column "apipublisher" violates not-null constraint
The full exception is here.
According to the logs, the direct cause is that the apipublisher field is null, which should not happen.
So now I have a few questions:
How do I prevent that? How do I configure the apipublisher value? And how do I get rid of the invalid data?
Thank you for any hints.
There is a reported issue for this. You can apply the fix mentioned in the jira ticket.
The command for sending a SQL script to a node or node group works fine, but the issue is with parsing the file itself.
Here is the log from the target node:
2014-08-27 16:51:12,130 ERROR [station-001] [DataLoaderService] [station-001-pull-1] Failed to load batch 000-31 because: In file: inline evaluation of: ``DROP TABLE ofep.PRODUCT_RESTRICTIONS;'' Encountered "ofep" at line 1, column 12.
java.lang.RuntimeException: In file: inline evaluation of: ``DROP TABLE ofep.PRODUCT_RESTRICTIONS;'' Encountered "ofep" at line 1, column 12.
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.script(DatabaseWriter.java:919)
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.write(DatabaseWriter.java:196)
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.write(DatabaseWriter.java:167)
at org.jumpmind.symmetric.io.data.writer.NestedDataWriter.write(NestedDataWriter.java:64)
at org.jumpmind.symmetric.model.ProcessInfoDataWriter.write(ProcessInfoDataWriter.java:65)
at org.jumpmind.symmetric.io.data.writer.NestedDataWriter.write(NestedDataWriter.java:64)
at org.jumpmind.symmetric.io.data.writer.TransformWriter.write(TransformWriter.java:217)
at org.jumpmind.symmetric.io.data.DataProcessor.forEachDataInTable(DataProcessor.java:194)
at org.jumpmind.symmetric.io.data.DataProcessor.forEachTableInBatch(DataProcessor.java:164)
at org.jumpmind.symmetric.io.data.DataProcessor.process(DataProcessor.java:114)
at org.jumpmind.symmetric.service.impl.DataLoaderService$LoadIntoDatabaseOnArrivalListener.end(DataLoaderService.java:779)
at org.jumpmind.symmetric.io.data.writer.StagingDataWriter.notifyEndBatch(StagingDataWriter.java:75)
at org.jumpmind.symmetric.io.data.writer.AbstractProtocolDataWriter.end(AbstractProtocolDataWriter.java:220)
at org.jumpmind.symmetric.io.data.DataProcessor.process(DataProcessor.java:124)
at org.jumpmind.symmetric.service.impl.DataLoaderService.loadDataFromTransport(DataLoaderService.java:407)
at org.jumpmind.symmetric.service.impl.DataLoaderService.loadDataFromPull(DataLoaderService.java:265)
at org.jumpmind.symmetric.service.impl.PullService.execute(PullService.java:129)
at org.jumpmind.symmetric.service.impl.NodeCommunicationService$2.run(NodeCommunicationService.java:307)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: In file: inline evaluation of: ``DROP TABLE ofep.PRODUCT_RESTRICTIONS;'' Encountered "ofep" at line 1, column 12.
at bsh.Parser.generateParseException(Parser.java:6068)
at bsh.Parser.jj_consume_token(Parser.java:5939)
at bsh.Parser.BlockStatement(Parser.java:2780)
at bsh.Parser.Line(Parser.java:147)
at bsh.Interpreter.Line(Interpreter.java:1000)
at bsh.Interpreter.eval(Interpreter.java:635)
at bsh.Interpreter.eval(Interpreter.java:739)
at bsh.Interpreter.eval(Interpreter.java:728)
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.script(DatabaseWriter.java:916)
... 20 more
2014-08-27 16:51:12,470 ERROR [station-001] [DataLoaderService] [station-001-pull-1] Failed while parsing batch
java.lang.RuntimeException: In file: inline evaluation of: ``DROP TABLE ofep.PRODUCT_RESTRICTIONS;'' Encountered "ofep" at line 1, column 12.
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.script(DatabaseWriter.java:919)
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.write(DatabaseWriter.java:196)
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.write(DatabaseWriter.java:167)
at org.jumpmind.symmetric.io.data.writer.NestedDataWriter.write(NestedDataWriter.java:64)
at org.jumpmind.symmetric.model.ProcessInfoDataWriter.write(ProcessInfoDataWriter.java:65)
at org.jumpmind.symmetric.io.data.writer.NestedDataWriter.write(NestedDataWriter.java:64)
at org.jumpmind.symmetric.io.data.writer.TransformWriter.write(TransformWriter.java:217)
at org.jumpmind.symmetric.io.data.DataProcessor.forEachDataInTable(DataProcessor.java:194)
at org.jumpmind.symmetric.io.data.DataProcessor.forEachTableInBatch(DataProcessor.java:164)
at org.jumpmind.symmetric.io.data.DataProcessor.process(DataProcessor.java:114)
at org.jumpmind.symmetric.service.impl.DataLoaderService$LoadIntoDatabaseOnArrivalListener.end(DataLoaderService.java:779)
at org.jumpmind.symmetric.io.data.writer.StagingDataWriter.notifyEndBatch(StagingDataWriter.java:75)
at org.jumpmind.symmetric.io.data.writer.AbstractProtocolDataWriter.end(AbstractProtocolDataWriter.java:220)
at org.jumpmind.symmetric.io.data.DataProcessor.process(DataProcessor.java:124)
at org.jumpmind.symmetric.service.impl.DataLoaderService.loadDataFromTransport(DataLoaderService.java:407)
at org.jumpmind.symmetric.service.impl.DataLoaderService.loadDataFromPull(DataLoaderService.java:265)
at org.jumpmind.symmetric.service.impl.PullService.execute(PullService.java:129)
at org.jumpmind.symmetric.service.impl.NodeCommunicationService$2.run(NodeCommunicationService.java:307)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: In file: inline evaluation of: ``DROP TABLE ofep.PRODUCT_RESTRICTIONS;'' Encountered "ofep" at line 1, column 12.
at bsh.Parser.generateParseException(Parser.java:6068)
at bsh.Parser.jj_consume_token(Parser.java:5939)
at bsh.Parser.BlockStatement(Parser.java:2780)
at bsh.Parser.Line(Parser.java:147)
at bsh.Interpreter.Line(Interpreter.java:1000)
at bsh.Interpreter.eval(Interpreter.java:635)
at bsh.Interpreter.eval(Interpreter.java:739)
at bsh.Interpreter.eval(Interpreter.java:728)
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.script(DatabaseWriter.java:916)
... 20 more
The script contains only one statement “DROP TABLE ofep.PRODUCT_RESTRICTIONS;”
Could you please help me?
Thanks,
Ayman
Symadmin has three different send subcommands...
send-sql Send SQL statement to node
send-schema Send schema change to node
send-script Send script to node
You used send-script, which is for sending BSH (BeanShell) scripts; that is why the payload was handed to the BeanShell interpreter and failed to parse (the bsh.Parser error above).
What you want to use is send-sql.
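A hypothetical invocation might look like the line below; the option names and placeholders are assumptions on my part and vary by SymmetricDS version, so check symadmin --help before running it:
symadmin send-sql --engine <your-engine> --node <target-node-id> "DROP TABLE ofep.PRODUCT_RESTRICTIONS"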