RRDtool data inconsistency - rrdtool

I am using RRDtool to graph the status of a pump on my Raspberry Pi. I must be doing some configuration wrong, since the values are close to what I am inputting, but not exact.
The pin status should be either 1 or 0.
<!-- 2014-03-10 10:24:00 CDT / 1394465040 --> <row><v>NaN</v></row>
<!-- 2014-03-10 10:25:00 CDT / 1394465100 --> <row><v>NaN</v></row>
<!-- 2014-03-10 10:26:00 CDT / 1394465160 --> <row><v>1.0000000000e+00</v></row>
<!-- 2014-03-10 10:27:00 CDT / 1394465220 --> <row><v>2.3711630000e-01</v></row>
<!-- 2014-03-10 10:28:00 CDT / 1394465280 --> <row><v>9.8168226667e-01</v></row>
<!-- 2014-03-10 10:29:00 CDT / 1394465340 --> <row><v>1.6624716667e-02</v></row>
<!-- 2014-03-10 10:30:00 CDT / 1394465400 --> <row><v>9.8544061667e-01</v></row>
<!-- 2014-03-10 10:31:00 CDT / 1394465460 --> <row><v>2.9590616667e-02</v></row>
<!-- 2014-03-10 10:32:00 CDT / 1394465520 --> <row><v>9.7204963333e-01</v></row>
<!-- 2014-03-10 10:33:00 CDT / 1394465580 --> <row><v>2.6263616667e-02</v></row>
<!-- 2014-03-10 10:34:00 CDT / 1394465640 --> <row><v>9.7533411667e-01</v></row>
<!-- 2014-03-10 10:35:00 CDT / 1394465700 --> <row><v>2.3075633333e-02</v></row>
<!-- 2014-03-10 10:36:00 CDT / 1394465760 --> <row><v>9.7849575000e-01</v></row>
<!-- 2014-03-10 10:37:00 CDT / 1394465820 --> <row><v>1.9948233333e-02</v></row>
<!-- 2014-03-10 10:38:00 CDT / 1394465880 --> <row><v>9.8158333333e-01</v></row>
<!-- 2014-03-10 10:39:00 CDT / 1394465940 --> <row><v>1.6888216667e-02</v></row>
<!-- 2014-03-10 10:40:00 CDT / 1394466000 --> <row><v>9.2141166667e-01</v></row>
<!-- 2014-03-10 10:41:00 CDT / 1394466060 --> <row><v>5.2411610000e-01</v></row>
<!-- 2014-03-10 10:42:00 CDT / 1394466120 --> <row><v>5.2411610000e-01</v></row>
<!-- 2014-03-10 10:43:00 CDT / 1394466180 --> <row><v>9.6672030000e-01</v></row>
<!-- 2014-03-10 10:44:00 CDT / 1394466240 --> <row><v>5.0939110833e-01</v></row>
<!-- 2014-03-10 10:45:00 CDT / 1394466300 --> <row><v>5.0939110833e-01</v></row>
<!-- 2014-03-10 10:46:00 CDT / 1394466360 --> <row><v>4.9845539167e-01</v></row>
<!-- 2014-03-10 10:47:00 CDT / 1394466420 --> <row><v>4.9845539167e-01</v></row>
<!-- 2014-03-10 10:48:00 CDT / 1394466480 --> <row><v>9.9399037500e-01</v></row>
<!-- 2014-03-10 10:49:00 CDT / 1394466540 --> <row><v>9.9399037500e-01</v></row>
<!-- 2014-03-10 10:50:00 CDT / 1394466600 --> <row><v>2.6977033333e-02</v></row>
<!-- 2014-03-10 10:51:00 CDT / 1394466660 --> <row><v>9.7898348333e-01</v></row>
<!-- 2014-03-10 10:52:00 CDT / 1394466720 --> <row><v>9.7898348333e-01</v></row>
create_db.sh
#!/bin/bash
rrdtool create pinstats.rrd \
--step 60 \
DS:pump:GAUGE:600:0:1 \
RRA:MAX:0.5:1:2016
update.sh
#!/bin/sh
a=0
while [ "$a" = 0 ]; do
    echo "pump ondate"
    /home/pi/on.sh
    /home/pi/graph.sh
    pump=1
    rrdtool update pinstats.rrd N:$pump
    sleep 60
    echo "pump offdate"
    /home/pi/off.sh
    /home/pi/graph.sh
    pump=0
    rrdtool update pinstats.rrd N:$pump
    sleep 120
done

You are being affected by Data Normalisation.
This adjusts your values by linear approximation so that the stored data points lie on interval boundaries; i.e., if your interval is 5 min, the stored values correspond to exactly 12:00, 12:05, 12:10, and so on.
This makes sense if you are graphing a large number that is a rate: the overall averages still work out, and the data points fall at regular intervals. However, if you are using a GAUGE data source with small integers, it becomes problematic.
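The fractional values in the dump above (0.237..., 0.981...) are exactly what such time-weighted averaging produces. A minimal sketch of the idea (pdp_value is a hypothetical helper; RRDtool's real code also handles heartbeats and unknown data):

```python
def pdp_value(samples, step_start, step):
    """Time-weighted average of a gauge over one step.

    samples: (timestamp, value) pairs; each value is assumed to hold
    from its timestamp until the next sample's timestamp.
    """
    total = 0.0
    for (t0, v), (t1, _) in zip(samples, samples[1:]):
        # Clip each sample's validity window to this step.
        lo = max(t0, step_start)
        hi = min(t1, step_start + step)
        if hi > lo:
            total += v * (hi - lo)
    return total / step

# Pump on for the first 14 s of a 60 s step, then off:
print(round(pdp_value([(0, 1), (14, 0), (999, 0)], 0, 60), 4))  # -> 0.2333
```

With updates landing mid-interval, a pure 0/1 signal therefore turns into fractions, which is the effect visible in the dump.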
In order to avoid this, you have to update on an interval boundary rather than using N as your time point.
Try this shell code:
interval=60
timestamp=`date +%s`
num_intervals=`expr $timestamp / $interval`
adjusted_time=`expr $num_intervals '*' $interval`
rrdtool update pinstats.rrd $adjusted_time:1
sleep $interval
adjusted_time=`expr $adjusted_time + $interval`
rrdtool update pinstats.rrd $adjusted_time:0
This code ensures that the update times are exactly on an interval boundary, and hence no Data Normalisation is performed (actually, it becomes a null operation).
For more details, see Alex van den Bogaerdt's excellent tutorial.

Related

Change the color of x SVG files

I'd like to change the color of at least 1,000 SVG files. The main problem I have is that the current SVGs don't contain the "fill" attribute, so I have to add fill="X" at the end of the SVG tag.
Here is an example of one SVG file:
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 19.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg version="1.1" id="Layer_1" " x="0px" y="0px"
viewBox="-236 286 30 30" style="enable-background:new -236 286 30 30;" xml:space="preserve">
Thanks for your help.
There are many possibilities to do that. The safest way would be to read the XML structure and then manipulate that. But for that specific example you could also use the following regex with e.g. sed or Python:
With sed:
sed -E 's/xml:space=\"preserve\">/xml:space="preserve" fill="red" >/gm;t;d'
With Python:
import re
regex = r"xml:space=\"preserve\">"
test_str = ("<?xml version=\"1.0\" encoding=\"utf-8\"?>\n"
"<!-- Generator: Adobe Illustrator 19.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->\n"
"<svg version=\"1.1\" id=\"Layer_1\" \" x=\"0px\" y=\"0px\"\n"
" viewBox=\"-236 286 30 30\" style=\"enable-background:new -236 286 30 30;\" xml:space=\"preserve\">")
subst = "xml:space=\"preserve\" fill=\"red\" >"
# You can manually specify the number of replacements by changing the 4th argument
result = re.sub(regex, subst, test_str, 0, re.MULTILINE)
if result:
    print(result)
You can also have a look at regex101.com/r/cIfbEd.
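The "safest way" mentioned above, parsing the XML instead of regexing it, could look roughly like this sketch (add_fill is a hypothetical helper; real SVG files usually declare an xmlns, so you may also need ET.register_namespace to keep the serialized output clean):

```python
import xml.etree.ElementTree as ET

def add_fill(svg_text, colour):
    # Parse the document and set a fill attribute on the root <svg> element.
    root = ET.fromstring(svg_text)
    root.set("fill", colour)
    return ET.tostring(root, encoding="unicode")

# Namespace-free stand-in for one of the files from the question:
svg = '<svg version="1.1" id="Layer_1" viewBox="-236 286 30 30"></svg>'
print(add_fill(svg, "red"))
```

Unlike the regex, this keeps working if the attribute order or whitespace inside the svg tag varies between files.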

nutch 1.16 parsechecker issue with file:/directory/ inputs

Following up on "nutch 1.16 skips file:/directory styled links in file system crawl", I have been trying (and failing) to get Nutch to crawl through different directories and subdirectories on a Windows 10 installation, calling commands with Cygwin.
The file dirs/seed.txt, used to initiate the crawl, contains the following:
file:/cygdrive/c/Users/abc/Desktop/anotherdirectory/
file:///cygdrive/c/Users/abc/Desktop/anotherdirectory/
file://localhost/cygdrive/c/Users/abc/Desktop/anotherdirectory/
Running cat ./dirs/seed.txt | ./bin/nutch normalizerchecker -stdin to check on how Nutch is normalizing (default regex-normalize.xml) yields
file:/cygdrive/c/Users/abc/Desktop/anotherdirectory/
file:/cygdrive/c/Users/abc/Desktop/anotherdirectory/
file:/localhost/cygdrive/c/Users/abc/Desktop/anotherdirectory/
While running cat ./dirs/seed.txt | ./bin/nutch filterchecker -stdin returns:
+file:/cygdrive/c/Users/abc/Desktop/anotherdirectory/
+file:///cygdrive/c/Users/abc/Desktop/anotherdirectory/
+file://localhost/cygdrive/c/Users/abc/Desktop/anotherdirectory/
Meaning all directories are seen as valid. So far, so good, but then, running the following:
cat ./dirs/seed.txt | ./bin/nutch parsechecker -stdin
yields the same error for all three directories, namely:
Fetch failed with protocol status: notfound(14), lastModified=0
The log files also do not really tell me what went wrong, just that Nutch won't read the input no matter what; the logs only contain a "fetching directory X" message per entry...
So what exactly is going on here? I'll also include the nutch-site.xml, regex-urlfilter.txt and regex-normalize.xml files, for completeness' sake.
nutch-site.xml :
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>http.agent.name</name>
<value>NutchSpiderTest</value>
</property>
<property>
<name>http.robots.agents</name>
<value>NutchSpiderTest,*</value>
<description>The agent strings we'll look for in robots.txt files,
comma-separated, in decreasing order of precedence. You should
put the value of http.agent.name as the first agent name, and keep the
default * at the end of the list. E.g.: BlurflDev,Blurfl,*
</description>
</property>
<property>
<name>http.agent.description</name>
<value>I am just testing nutch, please tell me if it's bothering your website</value>
<description>Further description of our bot- this text is used in
the User-Agent header. It appears in parenthesis after the agent name.
</description>
</property>
<property>
<name>plugin.includes</name>
<value>protocol-file|urlfilter-regex|parse-(html|tika|text)|index-(basic|anchor)|indexer-solr|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
<description>Regular expression naming plugin directory names to
include. Any plugin not matching this expression is excluded.
By default Nutch includes plugins to crawl HTML and various other
document formats via HTTP/HTTPS and indexing the crawled content
into Solr. More plugins are available to support more indexing
backends, to fetch ftp:// and file:// URLs, for focused crawling,
and many other use cases.
</description>
</property>
<property>
<name>file.content.limit</name>
<value>-1</value>
<description> Needed to stop buffer overflow errors - Unable to read.....</description>
</property>
<property>
<name>file.crawl.parent</name>
<value>false</value>
<description>The crawler is not restricted to the directories that you specified in the
Urls file but it is jumping into the parent directories as well. For your own crawlings you can
change this behavior (set to false) the way that only directories beneath the directories that you specify get
crawled.</description>
</property>
<property>
<name>parser.skip.truncated</name>
<value>false</value>
<description>Boolean value for whether we should skip parsing for truncated documents. By default this
property is activated due to extremely high levels of CPU which parsing can sometimes take.
</description>
</property>
<!-- the following is just an attempt at using a solution I found elsewhere, didn't work -->
<property>
<name>http.robot.rules.whitelist</name>
<value>file:/cygdrive/c/Users/abc/Desktop/anotherdirectory/</value>
<description>Comma separated list of hostnames or IP addresses to ignore
robot rules parsing for. Use with care and only if you are explicitly
allowed by the site owner to ignore the site's robots.txt!
</description>
</property>
</configuration>
regex-urlfilter.txt:
# The default url filter.
# Better for whole-internet crawling.
# Please comment/uncomment rules to your needs.
# Each non-comment, non-blank line contains a regular expression
# prefixed by '+' or '-'. The first matching pattern in the file
# determines whether a URL is included or ignored. If no pattern
# matches, the URL is ignored.
# skip http: ftp: mailto: and https: urls
-^(http|ftp|mailto|https):
# This change is not necessary but may make your life easier.
# Any file types you do not want to index need to be added to the list otherwise
# Nutch will often try to parse them and fail in doing so as it doesnt know
# how to deal with a lot of binary file types.:
# skip image and other suffixes we can't yet parse
# for a more extensive coverage use the urlfilter-suffix plugin
#-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|CSS|sit|SIT|eps|EPS
#|wmf|WMF|zip|ZIP|ppt|PPT|mpg|MPG|gz|GZ|rpm|RPM|tgz|TGZ|mov
#|MOV|exe|EXE|jpeg|JPEG|bmp|BMP|js|JS|asp|ASP|xxx|XXX|yyy|YYY
#|cs|CS|dll|DLL|refresh|REFRESH)$
# skip URLs longer than 2048 characters, see also db.max.outlink.length
#-^.{2049,}
# skip image and other suffixes we can't yet parse
# for a more extensive coverage use the urlfilter-suffix plugin
#-(?i)\.(?:gif|jpg|png|ico|css|sit|eps|wmf|zip|ppt|mpg|xls|gz|rpm|tgz|mov|exe|jpeg|bmp|js)$
# skip URLs containing certain characters as probable queries, etc.
-[?*!#=]
# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
+.*(/[^/]+)/[^/]+\1/[^/]+\1/
# For safe web crawling if crawled content is exposed in a public search interface:
# - exclude private network addresses to avoid that information
# can be leaked by placing links pointing to web interfaces of services
# running on the crawling machines (e.g., HDFS, Hadoop YARN)
# - in addition, file:// URLs should be either excluded by a URL filter rule
# or ignored by not enabling protocol-file
#
# - exclude localhost and loop-back addresses
# http://localhost:8080
# http://127.0.0.1/ .. http://127.255.255.255/
# http://[::1]/
#-^https?://(?:localhost|127(?:\.(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))){3}|\[::1\])(?::\d+)?(?:/|$)
#
# - exclude private IP address spaces
# 10.0.0.0/8
#-^https?://(?:10(?:\.(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))){3})(?::\d+)?(?:/|$)
# 192.168.0.0/16
#-^https?://(?:192\.168(?:\.(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))){2})(?::\d+)?(?:/|$)
# 172.16.0.0/12
#-^https?://(?:172\.(?:1[6789]|2[0-9]|3[01])(?:\.(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))){2})(?::\d+)?(?:/|$)
# accept anything else
+.
regex-normalize.xml:
<?xml version="1.0"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- This is the configuration file for the RegexUrlNormalize Class.
This is intended so that users can specify substitutions to be
done on URLs using the Java regex syntax, see
https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html
The rules are applied to URLs in the order they occur in this file. -->
<!-- WATCH OUT: an xml parser reads this file and ampersands must be
expanded to &amp; -->
<!-- The following rules show how to strip out session IDs, default pages,
interpage anchors, etc. Order does matter! -->
<regex-normalize>
<!-- removes session ids from urls (such as jsessionid and PHPSESSID) -->
<regex>
<pattern>(?i)(;?\b_?(l|j|bv_)?(sid|phpsessid|sessionid)=.*?)(\?|&|#|$)</pattern>
<substitution>$4</substitution>
</regex>
<!-- changes default pages into standard for /index.html, etc. into /
<regex>
<pattern>/((?i)index|default)\.((?i)js[pf]{1}?[afx]?|cgi|cfm|asp[x]?|[psx]?htm[l]?|php[3456]?)(\?|&|#|$)</pattern>
<substitution>/$3</substitution>
</regex> -->
<!-- removes interpage href anchors such as site.com#location -->
<regex>
<pattern>#.*?(\?|&|$)</pattern>
<substitution>$1</substitution>
</regex>
<!-- cleans ?&var=value into ?var=value -->
<regex>
<pattern>\?&</pattern>
<substitution>\?</substitution>
</regex>
<!-- cleans multiple sequential ampersands into a single ampersand -->
<regex>
<pattern>&{2,}</pattern>
<substitution>&</substitution>
</regex>
<!-- removes trailing ? -->
<regex>
<pattern>[\?&\.]$</pattern>
<substitution></substitution>
</regex>
<!-- normalize file:/// protocol prefix: -->
<!-- keep one single slash (NUTCH-1483) -->
<regex>
<pattern>^file://+</pattern>
<substitution>file:/</substitution>
</regex>
<!-- removes duplicate slashes but -->
<!-- * allow 2 slashes after colon ':' (indicating protocol) -->
<regex>
<pattern>(?<!:)/{2,}</pattern>
<substitution>/</substitution>
</regex>
</regex-normalize>
Any idea what I'm doing wrong here?
Nutch's file: protocol implementation "fetches" local files by creating a File object from the path component of the URL: /cygdrive/c/Users/abc/Desktop/anotherdirectory/. As stated in the discussion "Is there a java sdk for cygwin?", Java does not translate the Cygwin path, but replacing /cygdrive/c/ with c:/ should work.
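If rewriting the seed list by hand is tedious, the translation can be scripted; decygwin below is a hypothetical helper for pre-processing seed.txt, not part of Nutch:

```python
import re

def decygwin(url):
    # /cygdrive/<letter>/... is Cygwin-only; java.io.File needs <letter>:/...
    # Also collapses file:// and file://localhost/ prefixes to file:/.
    return re.sub(r'^file:/+(?:localhost/)?cygdrive/([a-z])/',
                  r'file:/\1:/', url)

print(decygwin('file:/cygdrive/c/Users/abc/Desktop/anotherdirectory/'))
# -> file:/c:/Users/abc/Desktop/anotherdirectory/
```

Running each line of seed.txt through such a filter before giving it to Nutch sidesteps the path-translation problem entirely.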

Unable to match XML element using Python regular expression

I have an XML document with the following structure-
> <?xml version="1.0" encoding="UTF-8"?> <!-- generated by CLiX/Wiki2XML
> [MPI-Inf, MMCI#UdS] $LastChangedRevision: 93 $ on 17.04.2009
> 12:50:48[mciao0826] --> <!DOCTYPE article SYSTEM "../article.dtd">
> <article xmlns:xlink="http://www.w3.org/1999/xlink"> <header>
> <title>Postmodern art</title> <id>192127</id> <revision>
> <id>244517133</id> <timestamp>2008-10-11T05:26:50Z</timestamp>
> <contributor> <username>FairuseBot</username> <id>1022055</id>
> </contributor> </revision> <categories> <category>Contemporary
> art</category> <category>Modernism</category> <category>Art
> movements</category> <category>Postmodern art</category> </categories>
> </header> <bdy> Postmodernism preceded by Modernism '' Postmodernity
> Postchristianity Postmodern philosophy Postmodern architecture
> Postmodern art Postmodernist film Postmodern literature Postmodern
> music Postmodern theater Critical theory Globalization Consumerism
> </bdy>
I am interested in capturing the text contained within <bdy>...</bdy> and for that I wrote the following Python 3 regex code:
file = open("sample_xml.xml", "r")
xml_doc = file.read()
file.close()
body_text = re.findall(r'<bdy>(.+)</bdy>', xml_doc)
But 'body_text' is always returning an empty list. However, when I try to capture the text for the tags <category>...</category> using the code
category_text = re.findall(r'<category>(.+)</category>', xml_doc)
This does the job.
Any idea(s) as to why the <bdy>...</bdy> XML element code is not working?
Thanks!
The special character . will not match a newline, so that regex will not match a multiline string.
You can change this behavior by specifying the DOTALL flag. To specify that flag you can include this at the start of your regular expression: (?s)
More information on Python's regular expression syntax can be found here: https://docs.python.org/3/library/re.html#regular-expression-syntax
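For example, with the inline flag the original findall starts matching across newlines (shortened stand-in for the document):

```python
import re

xml_doc = "<bdy>first line\nsecond line</bdy>"
# (?s) switches on DOTALL for the whole pattern, so . also matches \n.
body_text = re.findall(r'(?s)<bdy>(.+)</bdy>', xml_doc)
print(body_text)  # -> ['first line\nsecond line']
```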
You can use re.DOTALL
category_text = re.findall(r'<bdy>(.+)</bdy>', xml_doc, re.DOTALL)
Output:
[" Postmodernism preceded by Modernism '' Postmodernity\n> Postchristianity Postmodern philosophy Postmodern architecture\n> Postmodern art Postmodernist film Postmodern literature Postmodern\n> music Postmodern theater Critical theory Globalization Consumerism\n> "]

How to read out tc result verdict in an xml file with Python and ElementTree

I'm using Python 2.7.8 and have a problem using ElementTree to read out specific elements. I have an XML file with several <ATC> </ATC> tags. For each of these tags I want to read out the tcName and the verdict result for that tcName.
I'm trying to read out TestCase name ("REQPROD 232 Read IO") in the tag <OriginalTestVersionName> and the verdict result "NotExecuted" in the tag <Verdict Writable="true"> for that specific testCase from an xml file.
The printout from my function should be:
"REQPROD 232 Read IO"
"NotExecuted"
I have no problem reading out the TestCaseName, but how do I read out the verdict result? The problem is that inside the <ATC> </ATC> tags there are several tags with a verdict -> <Verdict Writable="true">. I only want to read the first verdict.
This is my code:
import xml.etree.ElementTree as ET

def openEmptyXML():
    xmltree = ET.parse('Automatimport.xml')
    testCaseContainer = xmltree.iter('ATC')
    for i in testCaseContainer:
        testCaseName = i.iter()
        for i in testCaseName:
            if "OriginalTestVersionName" in i.tag:
                tcName = i.text
                if 'REQPROD 232 Read IO' in tcName:
                    print tcName
                    verdictResult = i.iter()
                    for i in verdictResult:
                        if "Verdict" in i.tag:
                            print i.text
Here is the xml file:
<ATC>
<VersionNumber>37129</VersionNumber>
<ElementNumber>19245</ElementNumber>
<ElementName>ATESTCASE 19205</ElementName>
<Name>ATESTCASE : REQPROD 232 Read IO</Name>
<ConfigOrder>-1</ConfigOrder>
<AdditionalInformation Writable="true" />
<SeverityLevel />
<Precondition Writable="false" Filepath="EMPTY-FILE"></Precondition>
<Postcondition Writable="false" Filepath="EMPTY-FILE"></Postcondition>
<OverrideInformation Writable="true" />
<Description Writable="false" Filepath="EMPTY-FILE"></Description>
<ExecutionDate Writable="true">2015-01-22</ExecutionDate>
<OriginalTest>19051</OriginalTest>
<OriginalTestVersionNumber>36795</OriginalTestVersionNumber>
<OriginalTestVersionName>REQPROD 232 Read IO</OriginalTestVersionName>
<OverrideVerdictActive Writable="true">false</OverrideVerdictActive>
<Purpose Writable="false" Filepath="EMPTY-FILE"></Purpose>
<Tester Writable="true">Tester1</Tester>
<Verdict Writable="true">NotExecuted</Verdict>
<Procedure>
<PreCondition />
<PostCondition />
<ProcedureID>36797</ProcedureID>
<ProcedureName>Procedure1</ProcedureName>
<SequenceNumber>0</SequenceNumber>
<CID>-1</CID>
<Verdict Writable="true">NotExecuted</Verdict>
<OverrideVerdictActive Writable="true">false</OverrideVerdictActive>
<APComment Writable="true"></APComment>
<Result>
<SequenceNumber>1</SequenceNumber>
<Action>12</Action>
<ExpectedResult>1221</ExpectedResult>
<Result Writable="true"></Result>
<Verdict Writable="true">NotExecuted</Verdict>
</Result>
</Procedure>
<Requirement>
<VersionNumber>2824</VersionNumber>
<ElementNumber>6994</ElementNumber>
<ElementName>REQPROD 2393</ElementName>
<Name>Req 1234</Name>
<ID>2363</ID>
<Description Writable="false" Filepath=".\xhtml\xxxx.html">some description print out.....</Description>
<Purpose Writable="false">Purpose of tc is....</Purpose>
</Requirement>
<OriginalTestcase>
<VersionNumber>36732</VersionNumber>
<ElementNumber>1905</ElementNumber>
<ElementName>TESTCASE 19179</ElementName>
<Name>TESTCASE : REQPROD 232 Read IO</Name>
<SeverityLevel />
<AsilToBeTested />
<TestClass />
<TestTechnique />
</OriginalTestcase>
</ATC>
Does anyone have a solution for how to read out the tcName and its result verdict (line 20 in the xml file), "NotExecuted"?
********ADDED 2015-01-29*************
This is the first part of xml file to the first <ATC> element:
<?xml version="1.0" encoding="utf-16"?>
<ExchangeFormat xmlns:xsi="http://www.w.org/" xmlns:xsd="http://www.w.org" xsi:noNamespace="C:\Program Files\ExchangeFormat.xsd">
<SchemaVersion>1.3</SchemaVersion>
<TestOrder>
<VersionNumber>3712</VersionNumber>
<ElementNumber>1920</ElementNumber>
<ElementName>TESTORDER 1920</ElementName>
<Name>TESTORDER :Automatimport / MAIN; 0</Name>
<TestSuite>
<VersionNumber>3712</VersionNumber>
<ElementNumber>1920</ElementNumber>
<ElementName>TESTSUITE 192</ElementName>
<Name>TESTSUITE : Example_1</Name>
<ATC>
**se above**
Here is one way of getting the expected output using findall() and find(). This will print the text content of all three Verdict elements.
import xml.etree.ElementTree as ET

def openEmptyXML():
    doc = ET.parse("Automatimport.xml")
    for atc in doc.findall(".//ATC"):
        versionname = atc.find("OriginalTestVersionName")
        tcName = versionname.text
        if tcName == 'REQPROD 232 Read IO':
            verdict1 = atc.find("Verdict")
            verdict2 = atc.find("Procedure/Verdict")
            verdict3 = atc.find("Procedure/Result/Verdict")
            print tcName
            print verdict1.text
            print verdict2.text
            print verdict3.text

openEmptyXML()
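To see that find() really returns the first, top-level Verdict, the paths can be checked against a reduced stand-in for one <ATC> element (the Procedure verdict is changed to "Passed" here purely to tell the two apart):

```python
import xml.etree.ElementTree as ET

# Reduced stand-in for one <ATC> element from the question's file.
atc = ET.fromstring(
    "<ATC>"
    "<OriginalTestVersionName>REQPROD 232 Read IO</OriginalTestVersionName>"
    "<Verdict Writable='true'>NotExecuted</Verdict>"
    "<Procedure><Verdict Writable='true'>Passed</Verdict></Procedure>"
    "</ATC>"
)
# A plain tag path only matches direct children, so atc.find('Verdict')
# is the top-level verdict, never the nested one inside <Procedure>.
print(atc.find("Verdict").text)            # -> NotExecuted
print(atc.find("Procedure/Verdict").text)  # -> Passed
```

That direct-children behaviour is why the answer above can pick out each of the three verdicts with a distinct path.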

IIS 7.5 + PHP + httpOnlyCookies + requireSSL

How do I enable httpOnlyCookies and requireSSL for all cookies in IIS 7.5?
I have tried adding
<httpCookies httpOnlyCookies="true" requireSSL="true" />
within the
<system.webServer>
but it shows a 500 Internal Server Error.
Edit the php.ini and find the line:
session.cookie_httponly =
Set this value to 1 (I have had issues with true for some reason) and restart IIS once you're done editing php.ini.
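For the requireSSL half, php.ini has a matching directive, session.cookie_secure, which marks the session cookie Secure so browsers only send it over HTTPS. A sketch of both lines together (this covers PHP's session cookie only, not cookies set by other applications on the server):

```ini
; php.ini -- session cookie hardening
session.cookie_httponly = 1
session.cookie_secure = 1
```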