Found Old D Code That Needs Update

While digging around the internet in search of a tool that would allow me to extract sound files from a Java container, I found a 4+ year old bit of D code. Unfortunately, I couldn't get it to compile, and discovered upon searching Google that std.stream had been deprecated. I have never worked with D before now, however, and I can't find any clear-cut ideas on how to bring this pre-stream-death code up to speed. Are there any current D programmers who can help me out?
EDIT: I have picked up the std.stream source (and its related files) from the undead repository, but I am now having an 'int property' issue. According to the forum where I found this code, it worked at the time of posting four years ago, so I can't imagine that files.sort is inherently problematic.
import std.stdio;
import std.file;
import std.conv;
import std.stream;
import std.bitmanip;

void main(string args[])
{
    Stream pkfile = new BufferedFile(args[1], FileMode.In);
    int headersize;
    int counter = 0;
    int filecount;
    int throwaway;
    int offset;
    pkfile.read(headersize);
    pkfile.read(filecount);
    writefln("Count is : " ~ to!string(filecount));
    int[] files = new int[filecount];
    for(int x = 0; x < filecount; x++)
    {
        pkfile.read(throwaway);
        pkfile.read(offset);
        files[x] = offset + (headersize + 4);
    }
    files = files.sort;
    for(int x = 0; x < filecount; x++)
    {
        pkfile.seekSet(cast(long)files[x]);
        long len;
        pkfile.read(len);
        int newlen = cast(int)len;
        ubyte[] ogg = new ubyte[newlen];
        pkfile.read(ogg);
        string outname = getcwd() ~ "/" ~ to!string(counter) ~ ".ogg";
        Stream oggout = new BufferedFile(outname, FileMode.OutNew);
        oggout.write(shiftL(ogg, newlen));
        oggout.close();
        writefln("Creating file " ~ to!string(counter) ~ ".ogg");
        counter++;
    }
}

ubyte[] shiftL(ubyte[] ogg, int ogglength)
{
    ubyte[] temp = new ubyte[ogglength];
    for(int x = 0; x < ogglength; x++)
    {
        temp[x] = cast(byte)(ogg[x] - 1);
    }
    return temp;
}

The easiest way is just to grab a copy of std.stream.
But for rewriting, I'd read the full file into memory and then slice it as you read bits of it.

Related

Reading/writing binary file returns 0xCCCCCCCC

I have a script that dumps class info into a binary file, then another script that retrieves it.
Since binary files only accept chars, I wrote three functions for reading and writing Short Ints, Ints, and Floats. I've been experimenting with them so they're not overloaded properly, but they all look like this:
void writeInt(ofstream& file, int val) {
    file.write(reinterpret_cast<char *>(&val), sizeof(val));
}

int readInt(ifstream& file) {
    int val;
    file.read(reinterpret_cast<char *>(&val), sizeof(val));
    return val;
}
I'll put the class load/save script at the end of the post, but I don't think it'll make too much sense without the rest of the class info.
Anyway, it seems that the file gets saved properly. It has the correct size, and all of the data matches when I load it. However, at some point in the load process, the file.read() function starts returning 0xCCCCCCCC every time. This looks to me like a read error, but I'm not sure why, or how to correct it. Since the file is the correct size, and I don't touch the seekg() function, it doesn't seem likely that it's reaching the end of file prematurely. I can only assume it's an issue with my read/write method, since I did kind of hack it together with limited knowledge. However, if this is the case, why does it read all the data up to a certain point without issue?
The error starts occurring at a random point each run. This may or may not be related to the fact that all the class data is randomly generated.
Does anyone have experience with this? I'm not even sure how to continue debugging it at this point.
Anyway, here are the load/save functions:
void saveToFile(string fileName) {
    ofstream dataFile(fileName.c_str());
    writeInt(dataFile, inputSize);
    writeInt(dataFile, fullSize);
    writeInt(dataFile, outputSize);
    // Skips input nodes - no data needs to be saved for them.
    for (int i = inputSize; i < fullSize; i++) { // Saves each node after inputSize
        writeShortInt(dataFile, nodes[i].size);
        writeShortInt(dataFile, nodes[i].skip);
        writeFloat(dataFile, nodes[i].value);
        //vector<int> connects;
        //vector<float> weights;
        for (int j = 0; j < nodes[i].size; j++) {
            writeInt(dataFile, nodes[i].connects[j]);
            writeFloat(dataFile, nodes[i].weights[j]);
        }
    }
    read(500);
}

void loadFromFile(string fileName) {
    ifstream dataFile(fileName.c_str());
    inputSize = readInt(dataFile);
    fullSize = readInt(dataFile);
    outputSize = readInt(dataFile);
    nodes.resize(fullSize);
    for (int i = 0; i < inputSize; i++) {
        nodes[i].setSize(0); // Sets input nodes
    }
    for (int i = inputSize; i < fullSize; i++) { // Loads each node after inputSize
        int s = readShortInt(dataFile);
        nodes[i].setSize(s);
        nodes[i].skip = readShortInt(dataFile);
        nodes[i].value = readFloat(dataFile);
        //vector<int> connects;
        //vector<float> weights;
        for (int j = 0; j < nodes[i].size; j++) {
            nodes[i].connects[j] = readInt(dataFile); // Error occurs in a random instance of this call of readInt().
            nodes[i].weights[j] = readFloat(dataFile);
        }
        read(i); // Outputs class data to console
    }
    read(500);
}
Thanks in advance!
You have to check the result of the open, read, and write operations.
And you need to open the files (for both reading and writing) in binary mode. Once a read fails, istream::read leaves the target variable untouched, and 0xCCCCCCCC is simply what an uninitialized stack variable looks like in a Visual C++ debug build.
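For example, here is a minimal self-contained sketch of the question's writeInt/readInt helpers with binary mode and error checks added (the file name nodes.dat is made up for the demo; readShortInt, readFloat, and the other helpers would follow the same pattern):
#include <fstream>
#include <iostream>
#include <stdexcept>

// Binary-safe helpers with error checking (sketch).
void writeInt(std::ofstream& file, int val) {
    file.write(reinterpret_cast<char*>(&val), sizeof(val));
    if (!file)
        throw std::runtime_error("writeInt failed");
}

int readInt(std::ifstream& file) {
    int val = 0;
    file.read(reinterpret_cast<char*>(&val), sizeof(val));
    if (!file)  // read() sets failbit on a short read, so this also catches truncation
        throw std::runtime_error("readInt failed");
    return val;
}

int main() {
    // std::ios::binary is the crucial flag: without it, Windows text-mode
    // translation corrupts bytes that happen to match '\n', '\r', or Ctrl-Z.
    {
        std::ofstream out("nodes.dat", std::ios::binary);
        if (!out) throw std::runtime_error("could not open nodes.dat for writing");
        writeInt(out, 42);
    }
    std::ifstream in("nodes.dat", std::ios::binary);
    if (!in) throw std::runtime_error("could not open nodes.dat for reading");
    std::cout << readInt(in) << '\n'; // prints 42
}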

I can't seem to print several lines of data in Qt Creator. Program overwrites all output except for the last line

My intended output: To print the time and number of bacteria for an exponential equation. I'm trying to print every data point up until time t, for instance if I'm finding the growth up until 50 hours in, I want to print the number of bacteria at time 0, 1, 2, ..., 49, 50. I am trying to have each output on a new line as well.
So here is my code:
void MainWindow::on_pushButtonCalc_clicked()
{
    QString s;
    double t = ui->t->text().toDouble();
    double k = ui->k->text().toDouble();
    double n0 = ui->n0->text().toDouble();
    /*double example;
    example = k;
    s = s.number(example);
    ui->textOutput->setText(s);*/
    for(int c = 0; c < t; ++c)
    {
        double nt = n0*exp(k*t);
        s = s.number(nt);
        ui->textOutput->setText(s);
    }
}
I've tried quite a few different outputs, and have also been trying to append new points to an array and print the array, but I haven't had too much luck with that either. I'm somewhat new to C++, and very new to Qt.
Thank you for any suggestions.
The QTextEdit::setText function is going to replace the contents of the text edit with the parameter you pass in. Instead, you can use the append function:
for(int c = 0; c < t; ++c)
{
    double nt = n0*exp(k*t);
    s = QString::number(nt);
    ui->textOutput->append(s);
}
Note also that since QString::number is a static function, you don't need an instance to call it.
Alternately, you can create the string in your loop and then set it to the text edit using setText:
for (int c = 0; c < t; ++c)
{
    double nt = n0*exp(k*t);
    s += QString("%1 ").arg(nt);
}
ui->textOutput->setText(s);

How to generate a hashmap for huge chunk of data?

I want to make a map such that a set of pointers point to arrays of dynamic size.
I used hashing with chaining, but since the data I am using it for is huge, the program throws std::bad_alloc after a few iterations. The cause may be the new calls used to build the linked lists.
Can someone please suggest which data structure I should use, or anything else that could improve the memory usage of my hash table?
The program is in C++.
This is what my code looks like:
Initialization of hashtable:
class Link
{
public:
    double iData;
    Link* pNext;
    Link(double it) : iData(it)
    { }
    void displayLink()
    { cout << iData << " "; }
};

class List
{
private:
    Link* pFirst;
public:
    List()
    { pFirst = NULL; }
    void insert(double key)
    {
        if(pFirst==NULL)
            pFirst = new Link(key);
        else
        {
            Link* pLink = new Link(key);
            pLink->pNext = pFirst;
            pFirst = pLink;
        }
    }
};

class HashTable
{
public:
    int arraySize;
    vector<List*> hashArray;
    HashTable(int size)
    {
        hashArray.resize(size);
        for(int j=0; j<size; j++)
            hashArray[j] = new List;
    }
};
main snippet:
int t_sample = 1000;
for(int i=0; i < k; i++) // initialize random position
{
    x[i] = (cal_rand() * dom_sizex); // dom_sizex = 20e-10; cal_rand() generates rand no between 0 and 1
    y[i] = (cal_rand() * dom_sizey); // dom_sizey = 10e-10
}
for(int t=0; t < t_sample; t++)
{
    int size;
    size = cell_nox * cell_noy; // size of hash table; cell_nox = 212, cell_noy = 424
    HashTable theHashTable(size); // make table
    int hashValue = 0;
    for(int n=0; n<k; n++) // k = 10*212*424
    {
        int m = x[n] / cell_width; // cell_width = 4.7e-8
        int l = y[n] / cell_width;
        hashValue = (kx*l)+m;
        theHashTable.hashArray[hashValue]->insert(n);
    }
    -------
    -------
}
First things first, use a Standard Container. In your specific case, you might want:
either std::unordered_multimap<int, double>
or std::unordered_map<int, std::vector<double>>
(Note: if you do not have C++11, those are available in Boost)
Your main loop becomes (using the second option):
typedef std::unordered_map<int, std::vector<double>> HashTable;

for(int t = 0; t < t_sample; ++t)
{
    size_t const size = cell_nox * cell_noy;
    // size of hash table: cell_nox = 212, cell_noy = 424
    HashTable theHashTable;
    theHashTable.reserve(size);
    for (int n = 0; n < k; ++n) // k = 10*212*424
    {
        int m = x[n] / cell_width; // cell_width = 4.7e-8
        int l = y[n] / cell_width;
        int const cellId = (kx*l)+m;
        theHashTable[cellId].push_back(n);
    }
}
This will not leak memory (reliably), although of course you might have other leaks, and thus will give you a reliable baseline. It is also probably faster than your approach, with a more convenient interface, etc...
In general you should not re-invent the wheel, unless you have a specific need that is not addressed by the available wheels or you are actually trying to learn how to create a wheel or to create a better wheel.
The OS has to solve the same issue with memory pages, so maybe it's worth looking at how that is done. First of all, let's assume all pages are on disk. A page is a fixed-size memory chunk; for your use case, think of it as an array of your records. Because RAM is limited, the OS maintains a mapping between each page number and its location in RAM.
So, say your pages hold 1000 records each and you want to access record 2024: you ask for page 2 and read record 24 from that page. That way, your map is only 1/1000 of the size.
Now, if a page has no mapping to a memory location, it is either on disk or has never been accessed before (it's empty). In that case you need to swap out another page, load the required page from disk, and update the location mapping.
This is a very simplified description of what actually happens, and I wouldn't be surprised if someone jumps on me for describing it like this.
The point is:
What does this mean for you?
First of all, your data exceeds your RAM, so you won't get around writing to disk unless you want to try compression first.
Second, your chains can work as pages if you want, but I wonder whether just paging your hash code would work better. What I mean is: use the upper bits as the page number and the lower bits as the offset within the page (see the sketch after these points). Avoiding collisions is still key, since you want to load as few pages as possible. You can still chain your pages, and you end up with a much smaller map.
Third, a crucial part is deciding which pages to swap out to make room for new ones. LRU should do OK; if you can predict which pages you will (not) need, so much the better.
Fourth, you need placeholders for your pages that tell you whether each one is in memory or on disk.
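To make the upper-bits/lower-bits split concrete, here is a minimal sketch; the page size and the PageAddress/locate names are invented for illustration, not taken from the question:
#include <cstdint>

// Split a hash/cell value into a page number (upper bits) and an offset
// within that page (lower bits). PAGE_BITS is an arbitrary choice here.
constexpr unsigned PAGE_BITS = 10;                         // 1024 records per page
constexpr std::uint32_t PAGE_MASK = (1u << PAGE_BITS) - 1;

struct PageAddress {
    std::uint32_t page;   // which page to look up (and load from disk if absent)
    std::uint32_t offset; // which slot inside that page
};

PageAddress locate(std::uint32_t hashValue) {
    return { hashValue >> PAGE_BITS, hashValue & PAGE_MASK };
}
A small map from page number to in-memory location then tells you whether a page is resident, and an LRU policy decides which resident page to evict when a new one has to be loaded from disk.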
Hope this helps.

ArrayIndexOutOfBoundsException error

I'm writing a program in Java that analyses stock data.
I almost got it working, but now it gives me an ArrayIndexOutOfBoundsException.
int n = closingPrices.size();
double[][] cParray = new double[n][1];
for(int i = 0; i < n; i++)
{
    cParray[i][1] = closingPrices.get(i);
}
I hope you can help me figure out how to solve this problem.
The size of cParray[i] is 1, so it can hold only one element, at index [0].
So try cParray[i][0] = closingPrices.get(i),
OR declare the array as double[][] cParray = new double[n][2].

FMod Memory Stream Problem

EDIT: Well... that's very interesting. I made settings into a pointer and passed that. Worked beautifully. So, this is solved. I'll leave it open for anyone curious about the answer.
I'm having an issue creating a sound in FMod from a memory stream. I looked at the loadfrommemory example shipped with FMod and followed that. First, the code I'm using...
CSFX::CSFX(CFileData *fileData)
{
    FMOD_RESULT result;
    FMOD_CREATESOUNDEXINFO settings;
    settings.cbsize = sizeof(FMOD_CREATESOUNDEXINFO);
    settings.length = fileData->getSize();
    _Sound = 0;

    std::string temp = "";
    for (int i = 0; i < fileData->getSize(); i++)
        temp += fileData->getData()[i];

    result = tempSys->createSound(temp.c_str(), FMOD_SOFTWARE | FMOD_OPENMEMORY, &settings, &_Sound);
}
As it is like this, I get an access violation on tempSys->createSound(). I've confirmed that tempSys is valid as it works when creating sounds from a file. I've also confirmed the char * with my data is valid by writing the contents to a file, which I was then able to open in Media Player. I have a feeling there's a problem with settings. If I change that parameter to 0, the program doesn't blow up and I end up with result = FMOD_ERR_INVALID_HANDLE (which makes sense considering the 3rd parameter is 0). Any idea what I'm doing wrong?
Also, please disregard the use of std::string, I was using it for some testing purposes.
Solved by turning settings into a pointer. See code below:
CSFX::CSFX(CFileData *fileData)
{
    FMOD_RESULT result;
    FMOD_CREATESOUNDEXINFO * settings;
    _Sound = 0;

    std::string temp = "";
    for (int i = 0; i < fileData->getSize(); i++)
        temp += fileData->getData()[i];

    settings = new FMOD_CREATESOUNDEXINFO();
    settings->cbsize = sizeof(FMOD_CREATESOUNDEXINFO);
    settings->length = fileData->getSize();

    result = tempSys->createSound(temp.c_str(), FMOD_SOFTWARE | FMOD_OPENMEMORY, settings, &_Sound);

    delete settings;
    settings = 0;
}
You need to memset settings before using it:
memset(&settings, 0, sizeof(FMOD_CREATESOUNDEXINFO));
Otherwise it will contain garbage and potentially crash. (This is also why the pointer version works: new FMOD_CREATESOUNDEXINFO() with the parentheses value-initializes the struct to zero.)
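Applied to the original constructor, the fix looks roughly like this; it is a sketch reusing the question's own tempSys, CFileData, and _Sound names, with only the zeroing added:
CSFX::CSFX(CFileData *fileData)
{
    FMOD_RESULT result;
    FMOD_CREATESOUNDEXINFO settings;

    // Zero the whole struct so FMOD doesn't read garbage from the fields we
    // never set, then fill in only the members we actually need.
    memset(&settings, 0, sizeof(FMOD_CREATESOUNDEXINFO));
    settings.cbsize = sizeof(FMOD_CREATESOUNDEXINFO);
    settings.length = fileData->getSize();

    _Sound = 0;
    std::string temp = "";
    for (int i = 0; i < fileData->getSize(); i++)
        temp += fileData->getData()[i];

    result = tempSys->createSound(temp.c_str(),
                                  FMOD_SOFTWARE | FMOD_OPENMEMORY,
                                  &settings, &_Sound);
}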