Parametrising encoded web service requests

I am trying to parametrise web service requests in a web performance test. Using Fiddler2 I have recorded a sequence of over 60 web service requests for a transaction performed by my desktop application and saved them as a .webtest file. This web test runs without any errors and the responses that I have checked look correct.
When the web service requests are viewed in Visual Studio 2012 they appear in plain text, so I should be able to edit them to parametrise the values in the SOAP requests. For example, most of the requests contain the text <Database>db1a</Database> and I want to change them to get the database name from a context parameter. There are several other items to replace with parameters. For this one transaction there are over 60 web service requests, and I have other transactions to record. The .webtest file contains XML and the requests look like:
<Request Method="POST" Version="1.1" Url="http://example.com/somewhere.asmx" ThinkTime="83" Timeout="60" ParseDependentRequests="True" FollowRedirects="True" RecordResult="True" Cache="False" ResponseTimeGoal="0" Encoding="utf-8">
<Headers>
<Header Name="Content-Type" Value="text/xml; charset=utf-8" />
<Header Name="SOAPAction" Value=""http://example.com/webservices/VariousActionNamesHere"" />
</Headers>
<StringHttpBody ContentType="text/xml; charset=utf-8">PAA/AHgAbQBsACAAdg
... lots more characters not shown
+AA==</StringHttpBody>
</Request>
The StringHttpBody field contains an encoded version of the SOAP request. Visual Studio shows it as plain text. What is the encoding of this field and how can I decode and encode it?
I have installed Release 3.0 of the “Web and Load Test Plugins for Visual Studio Team Test” from http://teamtestplugins.codeplex.com/. They provide a slightly better interface for editing the SOAP requests one at a time, but they do not allow mass changes.
Converting the web test to a coded web test (i.e. into C#) shows the SOAP requests as simple text and they could be edited there, but I would prefer to keep the flexibility of a .webtest file.
Update: I have posted a partial answer to the question. While it works, it feels like the wrong approach because it is too complicated. So I am looking for a better overall approach.

StringHttpBody is base64 encoded. The raw body of the request is converted to a UTF-16 byte array and then base64 encoded, like so:
Convert.ToBase64String(Encoding.Unicode.GetBytes(oSession.GetRequestBodyAsString()));
For a quick view, you can copy/paste this string into Fiddler's Tools > TextWizard screen then use the From Base64 option to decode.
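Decoding in C# is just the reverse of that line. A minimal sketch, where encodedBody stands for the text of a StringHttpBody element (Encoding.Unicode is UTF-16 little-endian):
// base64 -> UTF-16 bytes -> string
string body = Encoding.Unicode.GetString(Convert.FromBase64String(encodedBody));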

Here is part of an answer to working with the StringHttpBody fields. This is about decoding and encoding the fields to allow easier understanding and modification.
Read the input XML and find the contents of the StringHttpBody fields. Replace each field's contents with the result of calling the following routine on the original contents. Write all the lines to a new intermediate file. The byte array contains UTF-16 characters as high and low bytes. (All the characters I have seen so far have high byte zero.)
private string DecodeBody(string source) {
    byte[] outBytes = Convert.FromBase64String(source);
    StringBuilder sb = new StringBuilder();
    // UTF-16 data must be an even number of bytes (Debug.Assert is from System.Diagnostics).
    Debug.Assert((outBytes.Length % 2) == 0);
    for (int ix = 0; ix < outBytes.Length; ix += 2) {
        // Little-endian UTF-16: low byte first; the high byte is expected to be zero.
        Debug.Assert(outBytes[ix + 1] == 0);
        sb.Append((char)outBytes[ix]);
    }
    return sb.ToString();
}
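As a sketch of this step, the decode pass could also be done with LINQ to XML rather than line-by-line text processing; the file names follow the flow described below. (The line-based approach in this answer has the advantage that everything outside the StringHttpBody fields stays byte-for-byte identical.)
using System.Xml.Linq;

XDocument doc = XDocument.Load("original.webtest");
XNamespace ns = doc.Root.Name.Namespace; // the .webtest namespace, if any
foreach (XElement body in doc.Descendants(ns + "StringHttpBody")) {
    body.Value = DecodeBody(body.Value); // routine above
}
doc.Save("intermediate1");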
There is now a file containing a plain-text version of the .webtest file. This file can easily be edited to parameterise fields of the requests. Use a routine similar to the one above, writing to another intermediate file. The routine has statements such as:
source = source.Replace("<Database>db1a</Database>", "<Database>{{DatabaseName}}</Database>");
The final intermediate file is then reencoded to create a new .webtest file. Just as before the contents of the StringHttpBody fields are found and replaced with the result of calling a routine. The encoding routine is:
private string EncodeBody(string source) {
    // Little-endian UTF-16: low byte first, then high byte, for each character.
    byte[] outBytes = new byte[2 * source.Length];
    for (int ix = 0; ix < source.Length; ix++) {
        char ch = source[ix];
        outBytes[2 * ix] = (byte)(ch & 0xFF);
        outBytes[2 * ix + 1] = (byte)((ch >> 8) & 0xFF);
    }
    return Convert.ToBase64String(outBytes);
}
The flow of files is thus:
decode original.webtest > intermediate1
parameterise intermediate1 > intermediate2
encode intermediate2 > final.webtest
On the small number of .webtest files I have tried, the encode operation is the inverse of the decode operation: the original file from before decoding is identical to the file after encoding. Having the two intermediate files allows easy checking and searching of the contents of the unencoded file and the effect of the parameterise step.

For a StringHttpBody with ContentType="application/json":
To decode the body:
var encodedString = childNode.InnerText; // childNode is the StringHttpBody node
var encodedStringBytes = Convert.FromBase64String(encodedString);
var decodedString = Encoding.Unicode.GetString(encodedStringBytes);
JObject jsonString = (JObject)JsonConvert.DeserializeObject(decodedString);
To encode the body:
childNode.InnerText = Convert.ToBase64String(Encoding.Unicode.GetBytes(JsonConvert.SerializeObject(jsonString)));
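Tying those fragments together, a self-contained sketch might look like this; childNode comes from an XmlDocument, the file names and the JSON edit are illustrative, and Newtonsoft.Json is assumed:
using System;
using System.Text;
using System.Xml;
using Newtonsoft.Json.Linq;

var doc = new XmlDocument();
doc.Load("original.webtest");
foreach (XmlNode childNode in doc.GetElementsByTagName("StringHttpBody")) {
    // Assumes the bodies are JSON, per the ContentType above.
    var jsonString = JObject.Parse(Encoding.Unicode.GetString(Convert.FromBase64String(childNode.InnerText)));
    jsonString["database"] = "{{DatabaseName}}"; // illustrative edit
    childNode.InnerText = Convert.ToBase64String(Encoding.Unicode.GetBytes(jsonString.ToString(Newtonsoft.Json.Formatting.None)));
}
doc.Save("final.webtest");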
I think this might help.

Related

Coldfusion 2018 No SOF segment Multi-Page TIFF Error

I have a new CF18 server and I'm getting some errors reading and converting some old images that were readable on my previous CF11 server. FYI GetReadableImageFormats results in "BMP,GIF,JPEG,JPEG 2000,JPEG2000,JPG,PNG,PNM,RAW,TIF,TIFF,WBMP"
Normally I read the files as a Binary and put it into memory for manipulation
<cffile action="readBinary" file="#file_location#" variable="binImage" />
<cfimage action="read" source="#binImage#" name="objImage" isbase64="no">
This now results in an error:
"An exception occurred while trying to read the image. No SOF segment in stream"
Reading the file with action="read" and dumping left(binImage, 999) results in:
"...2015:10:07 17:46:58 Kofax standard Multi-Page TIFF Storage Filter v3.03.000,..."
Then I tried reading it into Java using:
<cfset tifFileName="#file_location#">
<cfscript>
ss = createObject("Java","com.sun.media.jai.codec.FileSeekableStream").init(tifFileName);
//create JAI ImageDecoder
decoder = createObject("Java","com.sun.media.jai.codec.ImageCodec").createImageDecoder("tiff", ss, JavaCast("null",""));
</cfscript>
Which yields an error:
"Decoding of old style JPEG-in-TIFF data is not supported."
I found this...
Decoding of old style JPEG-in-TIFF data is not supported
Do you think using TwelveMonkeys ImageIO is the best path to follow for my issue?
UPDATE: Based on the suggestion that there is an invalid marker 0xFF9E I tried the following:
<cffile action="readBinary" file="#file_location#" variable="binImage" />
<cfset hexEncoding = binaryEncode(binImage, "hex")>
<cfset new_hexEncoding = replaceNoCase(hexEncoding, 'FF9E', 'FFE9', 'ALL')>
<cfset binImage = binaryDecode(new_hexEncoding, "hex")>
isImage(binImage) returns "NO" and the "No SOF segment in stream" error persists. I looped over the hexEncoding and found the FF9E string 23 times. I've never edited raw image data so I'm not sure my replace is correct.
Edit: At this point I'm fairly certain my search-and-replace logic (hexEncoding, 'FF9E', 'FFE9') is flawed. There is no occurrence of 0xFF9E in the binary-encoded binImage.
This was driving me nuts. I tried everything I could find short of installing extra Java libraries or routing it through other executables to make the conversion.
In my case there is only one JPEG in the TIFF, so I wrote something that literally grabs the binary data for the JPEG out of the TIFF (it doesn't account for pages) and serves it up. Once you have the binary of the JPEG you can write it to a file, do conversions on it, even stream it directly to the browser.
Here ya go, future people who need this. I didn't write it to do pages or detect what kind of TIFF it is, since for my uses I already know all that. These things are .bin files, but they are all the same single-page JPEG in a TIFF and I needed a way to serve them up quickly in a format that browsers don't hate. This runs fast enough to be served up on the fly. Is there a better way? Probably, but this works, is self-contained, can be copied and pasted, and makes complete sense to anyone who needs to edit it.
<cfscript>
strFileName = "test.tiff";
blnOutputImageToBrowser = true;
blnSaveToFile = true;
strSaveFile = GetDirectoryFromPath(GetCurrentTemplatePath()) & "test.jpg";
imgByteArray = FileReadBinary(strFileName);
//Convert to HEX String
hexString = binaryEncode(imgByteArray,"hex"); //Convert binary to HEX String, so we can pattern search it
//Set HEX Length
hexLength = arraylen(imgByteArray);
//Find Start of JPG Data in HEX String
jpegStartHEX = find("FFD8FF",hexString);
jpegStartBIN = (jpegStartHEX-1)/2; //-1 because CF arrays start on 1 and everyone else starts on 0. /2 because the HEX string positions are double the byte array positions
objByteBuffer = CreateObject("java","java.nio.ByteBuffer"); //Init JAVA byte buffer class for us to use later (this makes it go faster than trying to convert the hex string back to binary)
//Find Stop of JPG Data
jpegStopHEX = 0;
jpegStopBIN = 0;
intSearchIDX = jpegStartHEX+6; //Might as well start after the JPEG start block
blnStop = false;
while (intSearchIDX < len(hexString) && jpegStopHEX == 0 && !blnStop) {
newIDX = find("FFD9",hexString,intSearchIDX);
if (newIDX == 0) {
blnStop=true;
}
else {
if (newIDX%2 == 0) { //bad search try again (due to indexing in CF starting on 1 instead of 0, the even numbers are in between hex code [they are pairs like 00 and FF])
intSearchIDX = newIDX+1;
}
else { //Found ya
jpegStopHEX = newIDX;
blnStop=true;
}
}
}
jpegStopBIN = (jpegStopHEX-1)/2; //-1 because CF arrays start on 1 and everyone else starts on 0. /2 because the HEX string positions are double the byte array positions
//Dump JPG Binary into ByteArray from the start and stop positions we discovered
jpegLengthBIN = jpegStopBIN+2-jpegStartBIN;
objBufferImage = objByteBuffer.Allocate(JavaCast( "int", jpegLengthBIN ));
objBufferImage.Put(imgByteArray,JavaCast( "int", jpegStartBIN ),JavaCast( "int", jpegLengthBIN ));
if (blnSaveToFile) { //Dump byte array into test file
fileWrite(strSaveFile,objBufferImage.Array());
}
if (blnOutputImageToBrowser) {
img = ImageNew( objBufferImage.Array() );
ImageResize(img,"1200","","highestPerformance"); //Because we might as well show an example of resizing
outputImage(toBinary(toBase64(img))); //You could skip loading the byte array as an image object and just plop the binary in directly if you don't need to manipulate it any
}
</cfscript>
<cffunction name="outputImage" returntype="void">
<cfargument name="binInput" type="binary">
<cfcontent variable="#binInput#" type="image/jpeg" reset="true" />
<cfreturn>
</cffunction>

String in C# WebClient class changed to incorrect format

I call my WCF service as you can see:
ClientRequest.Headers["Content-type"] = "application/json";
string result = ClientRequest.DownloadString(ServiceHostName + "/NajaService.svc/GetCarData/" + plaque);
var javascriptserializer = new JavaScriptSerializer();
return javascriptserializer.Deserialize<NajaResult>(result);
But the returned data is like this:
{"CarColor":"آبي سير","CarModel":"1383","CarTip":"ال ايکس","CarType":"سواري","Chassis":"83844131","Family":"####","FuelType":1,"MotorNum":"12483068683","Name":"####","NationalCode":"0000000000","Plaque":"11-426د61","PlaqueCoded":110561426,"PlaqueType":"","SystemType":"سمند","VinNo":"IRFC831V3GJ844131"}
I converted it to UTF-8 bytes and back to a UTF-8 string, but that did not solve it.
The data is in Persian.
I traced the request in Fiddler and found that the data arrives in the correct format, but in my code it is changed.
The WebClient class has an Encoding property you can set before downloading the service reply. Details are here: https://msdn.microsoft.com/en-us/library/system.net.webclient.encoding(v=vs.110).aspx
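A minimal sketch, adapting the code from the question (assuming the service returns UTF-8, which the Fiddler trace suggests):
ClientRequest.Encoding = Encoding.UTF8; // decode the response bytes as UTF-8
ClientRequest.Headers["Content-type"] = "application/json";
string result = ClientRequest.DownloadString(ServiceHostName + "/NajaService.svc/GetCarData/" + plaque);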

Ionic 2 / cordova-plugin-file File.writeFile() refuses to create binary file correctly (png image)

In summary
File.writeFile() creates a PNG file of 0 bytes when trying to write a Blob made from base64 data.
In my application, I am trying to create a file that consists of base64 data stored in the db. The rendered equivalent of the data is a small anti-aliased graph curve in black on a transparent background (never more than 300 x 320 pixels) that has previously been created and stored from a canvas element. I have independently verified that the stored base64 data is indeed correct by rendering it with one of the various base64 encoders/decoders available online.
Output from "Ionic Info"
--------------------------------
Your system information:
Cordova CLI: 6.3.1
Gulp version: CLI version 3.9.1
Gulp local:
Ionic Framework Version: 2.0.0-rc.2
Ionic CLI Version: 2.1.1
Ionic App Lib Version: 2.1.1
Ionic App Scripts Version: 0.0.39
OS:
Node Version: v6.7.0
--------------------------------
The development platform is Windows 10, and I've been testing directly on a Samsung Galaxy S7 and S4 so far.
I know that the base64 data has to be converted into binary data (as a Blob) first, as File does not yet support writing base64 directly into an image file. I found various techniques with which to do this, and the code which seems to suit my needs the most (and reflects a similar way I would have done it in Java) is illustrated below:
Main code from constructor:
this.platform.ready().then(() => {
this.graphDataService.getDataItem(this.job.id).then((data) =>{
console.log("getpic:");
let imgWithMeta = data.split(",")
// base64 data
let imgData = imgWithMeta[1].trim();
// content type
let imgType = imgWithMeta[0].trim().split(";")[0].split(":")[1];
console.log("imgData:",imgData);
console.log("imgMeta:",imgType);
console.log("aftergetpic:");
// this.fs is correctly set to cordova.file.externalDataDirectory
let folderpath = this.fs;
let filename = "dotd_test.png";
File.resolveLocalFilesystemUrl(this.fs).then( (dirEntry) => {
console.log("resolved dir with:", dirEntry);
this.savebase64AsImageFile(dirEntry.nativeURL,filename,imgData,imgType);
});
});
});
Helper to convert base64 to Blob:
// convert base64 to Blob
b64toBlob(b64Data, contentType, sliceSize) {
//console.log("data packet:",b64Data);
//console.log("content type:",contentType);
//console.log("slice size:",sliceSize);
let byteCharacters = atob(b64Data);
let byteArrays = [];
for (let offset = 0; offset < byteCharacters.length; offset += sliceSize) {
let slice = byteCharacters.slice(offset, offset + sliceSize);
let byteNumbers = new Array(slice.length);
for (let i = 0; i < slice.length; i++) {
byteNumbers[i] = slice.charCodeAt(i);
}
let byteArray = new Uint8Array(byteNumbers);
byteArrays.push(byteArray);
}
console.log("size of bytearray before blobbing:", byteArrays.length);
console.log("blob content type:", contentType);
let blob = new Blob(byteArrays, {type: contentType});
// alternative way WITHOUT chunking the base64 data
// let blob = new Blob([atob(b64Data)], {type: contentType});
return blob;
}
save the image with File.writeFile()
// save the image with File.writeFile()
savebase64AsImageFile(folderpath,filename,content,contentType){
// Convert the base64 string in a Blob
let data:Blob = this.b64toBlob(content,contentType,512);
console.log("file location attempt is:",folderpath + filename);
File.writeFile(
folderpath,
filename,
data,
{replace: true}
).then(
_ => console.log("write complete")
).catch(
err => console.log("file create failed:", err)
);
}
I have tried dozens of different decoding techniques, but the effect is the same. However, if I hardcode simple text data into the writeFile() section, like so:
File.writeFile(
folderpath,
"test.txt",
"the quick brown fox jumps over the lazy dog",
{replace: true}
)
A text file IS created correctly in the expected location with the text string above in it.
However, I've noticed that whether the file is the 0 bytes PNG, or the working text file above, in both cases the ".then()" consequence clause of the File Promise never fires.
Additionally, I swapped the above method and used the Ionic 2 native Base64-To-Gallery library to create the images, which worked without a problem. However, having the images in the user's picture gallery or camera roll is not an option for me as I do not wish to risk a user's own pictures while marshalling / packing / transmitting / deleting the data-rendered images. The images should be created and managed as part of the app.
User marcus-robinson seems to have experienced a similar issue outlined here, but it was across all file types, and not just binary types as seems to be the case here. Also, the issue seems to have been closed:
https://github.com/driftyco/ionic/issues/5638
Anybody experiencing something similar, or possibly spot some error I might have caused? I've tried dozens of alternatives but none seem to work.
I had similar behaviour saving media files, which worked perfectly on iOS. Nonetheless, I had the issue of 0-byte file creation on some Android devices in release builds (dev builds worked perfectly). After a very long search, I arrived at the following solution.
I moved the polyfills.js script tag to the top of index.html in the Ionic project, before the cordova.js tag. This re-ordering somehow resolves the issue.
So the order should look like:
<script src="build/polyfills.js"></script>
<script type="text/javascript" src="cordova.js"></script>
Works on ionic 3 and ionic 4.
The credits go to 1
I got that working with most of your code:
this.file.writeFile(this.file.cacheDirectory, "currentCached.jpeg", this.b64toBlob(src, "image/jpg", 512) ,{replace: true})
The only difference I had was:
let byteCharacters = atob(b64Data.replace(/^data:image\/(png|jpeg|jpg);base64,/, ''));
instead of your
let byteCharacters = atob(b64Data);
Note: I did not use other trimming etc. like those techniques you used in your constructor class.

How to support Chinese in http request body?

URL = http://example.com,
Header = [],
Type = "application/json",
Content = "我是中文",
Body = lists:concat(["{\"type\":\"0\",\"result\":[{\"url\":\"test.cn\",\"content\":\"", unicode:characters_to_list(Content), "\"}]}"]),
lager:debug("URL:~p, Body:~p~n", [URL, Body]),
HTTPOptions = [],
Options = [],
Response = httpc:request(post, {URL, Header, Type, Body}, HTTPOptions, Options),
The HTTP request body received by the HTTP server is not 我是中文. How do I fix this issue?
Luck of the Encoding
You must take special care to ensure input is what you think it is because it may differ from what you expect.
This answer applies to the Erlang release that I'm running which is R16B03-1. I'll try to get all of the details in here so you can test with your own install and verify.
If you don't take specific action to change it, a string will be interpreted as follows:
In the Terminal (OS X 10.9.2)
TerminalContent = "我是中文",
TerminalContent = [25105,26159,20013,25991].
In the terminal the string is interpreted as a list of unicode characters.
In a Module
BytewiseContent = "我是中文",
BytewiseContent = [230,136,145,230,152,175,228,184,173,230,150,135].
In a module, the default encoding is latin1, and strings containing unicode characters are interpreted as bytewise lists (of UTF-8 bytes).
If you use data encoded like BytewiseContent, unicode:characters_to_list/1 will double-encode the Chinese characters and ææ¯ä will be sent to the server where you expected 我是中文.
Solution
Specify the encoding for each source file and term file.
If you run an erl command line, ensure it is setup to use unicode.
If you read data from files, translate the bytes from the bytewise encoding to unicode before processing (this goes for binary data acquired using httpc:request/N as well).
If you embed unicode characters in your module, ensure that you indicate as much by commenting within the first two lines of your module:
%% -*- coding: utf-8 -*-
This will change the way the module interprets the string such that:
UnicodeContent = "我是中文",
UnicodeContent = [25105,26159,20013,25991].
Once you have ensured that you are concatenating characters and not bytes, the concatenation is safe. Don't use unicode:characters_to_list/1 to convert your string/list until the whole thing has been built up.
Example Code
The following function works as expected when given a Url and a list of unicode character Content:
http_post_content(Url, Content) ->
ContentType = "application/json",
%% Concat the list of (character) lists
Body = lists:concat(["{\"content\":\"", Content, "\"}"]),
%% Explicitly encode to UTF8 before sending
UnicodeBin = unicode:characters_to_binary(Body),
httpc:request(post,
{
Url,
[], % HTTP headers
ContentType, % content-type
UnicodeBin % the body as binary (UTF8)
},
[], % HTTP Options
[{body_format,binary}] % indicate the body is already binary
).
To verify results I wrote the following HTTP server using node.js and express. The sole purpose of this dead-simple server is to sanity check the problem and solution.
var express = require('express'),
bodyParser = require('body-parser'),
util = require('util');
var app = express();
app.use(bodyParser());
app.get('/', function(req, res){
res.send('You probably want to perform an HTTP POST');
});
app.post('/', function(req, res){
util.log("body: "+util.inspect(req.body, false, 99));
res.json(req.body);
});
app.listen(3000);
Gist
Verifying
Again in Erlang, the following function will check to ensure that the HTTP response contains the echoed JSON, and ensures the exact unicode characters were returned.
verify_response({ok, {{_, 200, _}, _, Response}}, SentContent) ->
%% use jiffy to decode the JSON response
{Props} = jiffy:decode(Response),
%% pull out the "content" property value
ContentBin = proplists:get_value(<<"content">>, Props),
%% convert the binary value to unicode characters,
%% it should equal what we sent.
case unicode:characters_to_list(ContentBin) of
SentContent -> ok;
Other ->
{error, [
{expected, SentContent},
{received, Other}
]}
end;
verify_response(Unexpected, _) ->
{error, {http_request_failed, Unexpected}}.
The complete example.erl module is posted in a Gist.
Once you've got the example module compiled and an echo server running you'll want to run something like this in an Erlang shell:
inets:start().
Url = example:url().
Content = example:content().
Response = example:http_post_content(Url, Content).
If you've got jiffy set up you can also verify the content made the round trip:
example:verify_response(Response, Content).
You should now be able to confirm round-trip encoding of any unicode content.
Translating Between Encodings
While I explained the encodings above you will have noticed that TerminalContent, BytewiseContent, and UnicodeContent are all lists of integers. You should endeavor to code in a manner that allows you to be certain what you have in hand.
The oddball encoding is bytewise which may turn up when working with modules that are not "unicode aware". Erlang's guidance on working with unicode mentions this near the bottom under the heading Lists of UTF-8 Bytes. To translate bytewise lists use:
%% from http://www.erlang.org/doc/apps/stdlib/unicode_usage.html
utf8_list_to_string(StrangeList) ->
unicode:characters_to_list(list_to_binary(StrangeList)).
My Setup
As far as I know, I don't have local settings that modify Erlang's behavior. My Erlang is R16B03-1 built and distributed by Erlang Solutions, my machine runs OS X 10.9.2.

How can I add characters at the end of my xml response?

I have a restful web service that's returning results like this:
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">Some Text</string>
However, the people on the receiving end need this text to be terminated w/ a special character such as "\r". How can I add that text to the end of my serialized response?
I'm sending this response from inside of a WCF service in C# like this:
[WebGet(UriTemplate = "/MyMethod?x={myId}"), OperationContract]
string GetSomeText(Guid myId);
I can think of three solutions:
1. Http Module (least code but most confusing for maintenance)
Assuming you're hosting your WCF in ASP.Net, you can create an Http module to add a \r to the end of all responses in your application.
This could be the code of the Http module. I've used 'End' as a suffix here because it's easier to read in a browser than \r, but for \r you would change the "End" in context_PostRequestHandlerExecute to "\r".
public class SuffixModule : IHttpModule
{
private HttpApplication _context;
public void Init(HttpApplication context)
{
_context = context;
_context.PostRequestHandlerExecute += context_PostRequestHandlerExecute;
}
void context_PostRequestHandlerExecute(object sender, EventArgs e)
{
// write the suffix if there is a body to this request
string contentLengthHeaderValue = _context.Response.Headers["Content-length"];
string suffix = "End";
if (!String.IsNullOrEmpty(contentLengthHeaderValue))
{
// Increase the content-length header by the length of the suffix
_context.Response.Headers["Content-length"] =
(int.Parse(contentLengthHeaderValue) + suffix.Length)
.ToString();
// and write the suffix!
_context.Response.Write(suffix);
}
}
public void Dispose()
{
// haven't worked out if I need to do anything here
}
}
Then you need to set up your module up in your web.config. The below assumes you have IIS running in Integrated Pipeline mode. If you haven't, you need to register the modules in the <system.web><httpModules> section.
<system.webServer>
<modules runAllManagedModulesForAllRequests="true">
<!-- 'type' should be the fully-qualified name of the type,
followed by a comma and the name of the assembly-->
<add name="SuffixModule" type="WcfService1.SuffixModule,WcfService1"/>
</modules>
</system.webServer>
This option has the problems that it would affect all requests in your application by default and it would probably fall over if you decided to use chunked encoding.
2. Use ASP.NET MVC (changes technology but good maintainability)
Use MVC instead of WCF. You'd have far better control over your output.
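A minimal sketch of what that could look like; the controller, the lookup, and the route are hypothetical:
// using System.Web.Mvc;
public class TextController : Controller
{
    // GET /Text/GetSomeText?myId=...
    public ActionResult GetSomeText(Guid myId)
    {
        string text = LookUpText(myId); // hypothetical data access
        // Build the payload yourself and append the terminator directly.
        return Content("<string>" + text + "</string>\r", "text/xml", Encoding.UTF8);
    }
}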
3. Custom Serializer (lots of code, but less hacky than option 1)
You could write your own custom serializer. This StackOverflow question gives you pointers on how to do this. I didn't write a prototype for this because it looked as though there were many, many methods which needed to be overridden. I daresay most of them would be pretty simple delegations to the standard serializer.