I'm learning how to hack PS3 games, where I need to edit what is stored at certain memory addresses. Here is an example of what I see people do to achieve this:
*(char*)0x1786418 = 0x40;
This line of code turns on super speed for COD Black Ops II.
I'm not 100% sure what is going on here. I know that 0x1786418 is the address and 0x40 sets the value at that address. But I'm not so sure what *(char*) does and how does 0x40 turn on super speed?
An explanation of the syntax, and of how it turns on super speed, would be much appreciated.
Thanks!
You should consider understanding the basics of the programming language before you try to go into reverse engineering. That is definitely an advanced topic that you don't want to use as a way to get started; it will make things unnecessarily difficult for you.
I'm not 100% sure what is going on here. I know that 0x1786418 is the address and 0x40 sets the value at that address.
This is as much as anyone here might be able to tell you, unless the person who reverse-engineered the software shows up here and explains it.
But I'm not so sure what *(char*) does
This is a way to take the address and interpret it as a pointer to a byte (a char in C is 1 byte of memory). The outer * then dereferences that pointer so the value it points to can be modified, in this case set to 0x40.
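To make the syntax concrete, here is the same expression spelled out with a named pointer, writing to a local variable instead of the hard-coded game address (a minimal sketch; the real hack only makes sense inside the game's process, where 0x1786418 is a valid address):
#include <cstdio>

int main()
{
    char value = 0;
    char *p = (char *)&value;          // interpret an address as "pointer to one byte"
    *p = 0x40;                         // dereference it and store 0x40 at that byte
    printf("0x%x\n", (unsigned)value); // prints 0x40
    return 0;
}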
and how does 0x40 turn on super speed?
This is very specific to the game itself. Someone must've figured out where data about player movement speed is stored in memory (specifically for the PS3) and is updating it this way.
Something like this could easily be broken by a simple patch, because code changes can make things end up at different addresses, requiring additional reverse-engineering effort.
If anyone is seeing this and wants to know how to set prestiges or enable red boxes or whatnot, I'll explain how (MW3 will be my example).
For a prestige it would be something like:
*(char*)0x01C1947C = 20;
That sets prestige 20. The value can be written in decimal or hex: 20 and 0x14 are the same number, and for prestige 12 you could write either 12 or 0xC.
If you don't know how, just look up the prestige you want in hex :)
Now for stuff like red boxes (I'm assuming you know about bools, if statements and voids; I'm not going to cover them, only how you would set it).
To enable red boxes you would do the following (p.s. bytesOn can be called anything):
char bytesOn[] = { 0x60, 0x00, 0x00, 0x00 };
write_process(0x65D14, bytesOn, sizeof(bytesOn));
whateverYourBoolIsCalled = true;
Turning it off works the same way, except you have to use the other bytes :)
I'll add one more example: setting a name in an SPRX:
char name[] = { "name here" };
write_process(0x01BBBC2C, name, sizeof(name));
There are shorter ways of doing this, but I think doing it this way is the best way to understand it :)
So yeah, that's been my tut :)
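For anyone wondering what write_process boils down to: it just copies raw bytes over an address in the game's process (a real SPRX helper may go through a syscall or a memory-write API instead; the signature below is assumed from the calls above). The bytes 60 00 00 00 are the PowerPC nop instruction, which suggests the red-box patch is NOPing out an instruction. A hypothetical in-process sketch:
#include <cstring>
#include <cstdint>
#include <cstddef>

// Hypothetical in-process version: overwrite `size` bytes at `address` with `bytes`.
void write_process(uint32_t address, const void *bytes, size_t size)
{
    std::memcpy((void *)(uintptr_t)address, bytes, size);
}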
In Visual Studio 15, I am pulling up the memory window using Debug->Memory->Memory 1. In this window, I can type either an address or an in-scope pointer while debugging to view the contents of that memory.
For instance:
int *p; //doesn't really matter what p is, but rather what it points to
*p = 5;
In the console, I can type 'p' and it will bring up a memory table showing 0xaabbccdd: 05 00 00 00 ...
I am working on a project which requires precise manipulation of values at memory locations, so I need to be able to read these values efficiently; however, the way they are currently displayed makes them very difficult to read. Normally, I would expect to read 5 in hexadecimal as 0x00000005, but this format is much more foreign to me: the bytes are shown in the order they are stored in memory, which on this little-endian target means least-significant byte first. So for a more comprehensive example, *p = 0x12345678 is displayed as 0xaabbccdd: 78 56 34 12, and that is incredibly cumbersome to read. Is there a way to change the format of this in Visual Studio 15?
On the context menu for the memory window you can choose the unit size that bytes are grouped by. Personally, I generally prefer the Locals and Watch windows; the Watch window in particular allows a great deal of control over how items are displayed. See https://msdn.microsoft.com/en-us/library/75w45ekt.aspx for details on that. You can also customize how types are displayed by creating a native visualization (natvis) file; see https://msdn.microsoft.com/en-us/library/jj620914.aspx.
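For reference, the bytes are not scrambled; the window simply shows them in address order, and x86 stores multi-byte integers least-significant byte first (little-endian). A small sketch that prints the same byte order you see in the memory window:
#include <cstdio>

int main()
{
    int value = 0x12345678;
    const unsigned char *bytes = (const unsigned char *)&value;
    // Print the bytes in the order they sit in memory, lowest address first.
    // On a little-endian machine (x86/x64) this prints: 78 56 34 12
    for (unsigned i = 0; i < sizeof(value); ++i)
        printf("%02x ", bytes[i]);
    printf("\n");
    return 0;
}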
If I had an old PC game with certain variables that cannot exceed 255 without crashing, would it be possible to convert ALL 8-bit integers into 16-bit integers by modifying the Windows 95 executable?
The game I'm talking about is Total Annihilation from 1997. And although the game itself was way ahead of its time and had the capability to be modded into epic experiences (hell, the game was so far ahead of its time that its data files use a JSON-like syntax, and it also supports 4K and still looks amazing), there is unfortunately a limit to the total number of weapons in the game. All weapons have IDs, and the max ID of a weapon is 255, as can be seen below:
[NUCLEAR_MISSILE]
{
ID=122;
name=Nuclear Missile;
rendertype=1;
lineofsight=1;
vlaunch=1;
model=ballmiss;
range=32000;
reloadtime=180;
noautorange=1;
weapontimer=5;
flighttime=400;
weaponvelocity=350;
weaponacceleration=50;
turnrate=32768;
areaofeffect=512;
edgeeffectiveness=0.25;
energypershot=180000;
metalpershot=2000;
stockpile=1;
targetable=1;
commandfire=1;
cruise=1;
soundstart=misicbm1;
soundhit=xplomed4;
firestarter=100;
smokedelay=.1;
selfprop=1;
smoketrail=1;
propeller=1;
twophase=1;
guidance=1;
tolerance=4000;
shakemagnitude=24;
shakeduration=1.5;
explosiongaf=commboom;
explosionart=commboom;
waterexplosiongaf=fx;
waterexplosionart=h2oboom2;
lavaexplosiongaf=fx;
lavaexplosionart=lavasplashlg;
startsmoke=1;
[DAMAGE]
{
default=5500;
ARMCOM=2900;
CORCOM=2900;
}
}
Would this be worth attempting at all? I'm not very familiar with assembly language, but I've heard that back in the day you sometimes had to write your own assembly in certain places even in C++ projects.
All I want to do is bump all the 8-bit ints up to 16-bit by editing the .EXE; how difficult would this be to pull off?
All I want to do is bump all the 8-bit ints up to 16-bit by editing the .EXE; how difficult would this be to pull off?
Essentially impossible without access to the source code. Replacing an 8-bit integer with a 16-bit one would change the size and layout of the data structure which contained it. Any code which "touched" those objects, or any objects which contained them, would need to be updated. Identifying that code would be an extensive project -- in all probability, it'd require most of the game to be manually decompiled to C source code.
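To make the layout problem concrete, here is a hypothetical sketch (invented structs, not the game's real ones) showing how widening a single 8-bit field shifts the offsets of later fields and the size of the whole structure; every hard-coded offset in the compiled code would then be wrong:
#include <cstdio>
#include <cstddef>
#include <cstdint>

struct WeaponNarrow { uint8_t  id; uint8_t damage; uint8_t range; };
struct WeaponWide   { uint16_t id; uint8_t damage; uint8_t range; };

int main()
{
    // Widening `id` from 8 to 16 bits moves `damage` and `range` and grows the struct.
    printf("narrow: sizeof=%zu damage@%zu range@%zu\n",
           sizeof(WeaponNarrow), offsetof(WeaponNarrow, damage), offsetof(WeaponNarrow, range));
    printf("wide:   sizeof=%zu damage@%zu range@%zu\n",
           sizeof(WeaponWide), offsetof(WeaponWide, damage), offsetof(WeaponWide, range));
    return 0;
}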
I got point sprites working almost immediately, but I'm stuck on one thing: they are rendered as what look like 2x2-pixel sprites, which is not very easy to see, especially if there's other motion. I've tried tweaking all the variables; here's the code that probably works best:
void renderParticles()
{
    for (int i = 0; i < particleCount; i++)
    {
        particlePoints[i] += particleSpeeds[i];
    }

    void* data;
    pParticleBuffer->Lock(0, particleCount*sizeof(PARTICLE_VERTEX), &data, NULL);
    memcpy(data, particlePoints, sizeof(particlePoints));
    pParticleBuffer->Unlock();

    pd3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_POINTSPRITEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_POINTSIZE, (DWORD)1.0f);
    //pd3dDevice->SetRenderState(D3DRS_POINTSIZE_MAX, (DWORD)9999.0f);
    //pd3dDevice->SetRenderState(D3DRS_POINTSIZE_MIN, (DWORD)0.0f);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALE_A, (DWORD)0.0f);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALE_B, (DWORD)0.0f);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALE_C, (DWORD)1.0f);

    pd3dDevice->SetStreamSource(0, pParticleBuffer, 0, sizeof(D3DXVECTOR3));
    pd3dDevice->DrawPrimitive(D3DPT_POINTLIST, 0, particleCount);

    pd3dDevice->SetRenderState(D3DRS_POINTSPRITEENABLE, FALSE);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALEENABLE, FALSE);
}
Ok, so when I change POINTSCALE_A and POINTSCALE_B, nothing really changes much, same for C. POINTSIZE also makes no difference. When I try to assign something to POINTSIZE_MAX and _MIN, no matter what I assign, it always stops the rendering of the sprites. I also tried setting POINTSIZE with POINTSCALEENABLE set to false, no luck there either.
This looks like something not many people have found an answer to. An explanation of the mechanism exists on MSDN, and yes, I did check Stack Overflow and found a similar question with no answer. Another source only suggested setting the max and min variables, which, as I said, pretty much makes my particles disappear.
particlePoints and particleSpeeds are D3DXVECTOR3 arrays, and I get what I expect from them. A book I follow suggested I define a custom vertex with XYZ and diffuse, but to be honest I see no reason for this; it just adds more to an already long list of declarations.
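For reference, the custom vertex the book suggests would look something like this (a sketch; the names are mine, and whether you actually need the diffuse member depends on how you shade the particles):
// Hypothetical point-sprite vertex: position plus diffuse color, FVF style.
struct ParticleVertex
{
    float x, y, z;   // D3DFVF_XYZ
    DWORD color;     // D3DFVF_DIFFUSE
};
#define D3DFVF_PARTICLE (D3DFVF_XYZ | D3DFVF_DIFFUSE)
// The stride passed to SetStreamSource would then be sizeof(ParticleVertex)
// instead of sizeof(D3DXVECTOR3).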
Any help is welcome, thanks in advance.
Edit: Further tweaking showed that when any of the scale values is above 0.99999997f (at least between that and 0.99999998f I see the effect), I get the tiny version; if I put them at that value or lower, I pretty much get the size of the texture. That is still not really good, though, as it may be large, and it pretty much fails the task of being controllable.
Glad to help :) My comment as an answer:
One more problem that I've seen is your float-to-DWORD cast. The official documentation suggests the conversion *((DWORD*)&Variable) (doc) to be passed into SetRenderState. I'm not very familiar with C++, but I would assume this makes a difference, because your cast produces an actual integer DWORD, whereas the API expects the float's bits in the DWORD's memory space.
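In other words, something like this (a sketch; the values are placeholders, what matters is passing the float's bit pattern rather than a truncated integer):
// (DWORD)1.0f converts the value to the integer 1; the render state instead
// expects the raw IEEE-754 bits of the float, reinterpreted as a DWORD.
float pointSize   = 4.0f;  // placeholder size
float pointScaleA = 0.0f;
float pointScaleB = 0.0f;
float pointScaleC = 1.0f;
pd3dDevice->SetRenderState(D3DRS_POINTSIZE,    *((DWORD*)&pointSize));
pd3dDevice->SetRenderState(D3DRS_POINTSCALE_A, *((DWORD*)&pointScaleA));
pd3dDevice->SetRenderState(D3DRS_POINTSCALE_B, *((DWORD*)&pointScaleB));
pd3dDevice->SetRenderState(D3DRS_POINTSCALE_C, *((DWORD*)&pointScaleC));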
I'm using Core Audio to play some continuous sound. I managed to get it working; however, I now have a problem that I can't overcome. The sound is playing, and more than that, it's the actual sound I need, not just noise, but along with it I get noise, hisses and pops as well.
I verified the sample rate, zeroed out all the silence buffers, checked the channels (I'm positive I only have 1), and double-checked the algorithm that feeds the playback method (but I'll add it here just to be sure). My experience with sound is slim, so I'm probably doing something terribly wrong. I would like to know if there are other things to check, or what the best approach is here: where should I look first?
//init
playedBufferSize=audioFilesSize[audioFilesIndex];
startPointForPlayedBuffer=0;
//feed the audio
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    AudioBuffer buffer = ioData->mBuffers[0];
    if (playedBufferSize >= buffer.mDataByteSize) {
        // Enough data left in the current file: fill the whole output buffer from it.
        memcpy(buffer.mData, audioFiles[audioFilesIndex]+startPointForPlayedBuffer, buffer.mDataByteSize);
        playedBufferSize -= buffer.mDataByteSize;
        startPointForPlayedBuffer += buffer.mDataByteSize;
    } else {
        // Current file is almost exhausted: copy its tail, then continue from the next file.
        memcpy(buffer.mData, audioFiles[audioFilesIndex]+startPointForPlayedBuffer, playedBufferSize);
        nextAudioFileIndex();
        memcpy(buffer.mData+playedBufferSize, audioFiles[audioFilesIndex], playedBufferSize);
        playedBufferSize = audioFilesSize[audioFilesIndex]-(buffer.mDataByteSize-playedBufferSize);
        startPointForPlayedBuffer = (buffer.mDataByteSize-playedBufferSize);
    }
    return noErr;
}
EDIT: I know that the code above won't play the sound continuously, because it fills the buffer with a bunch of 0s at some point; however, I get many strange sounds along with that. If the sound would play, stop for a short while and then start again, I would be happy. That would be a good start :)
EDIT 2: I edited the code so that it won't output silence anymore; I still get the hisses and pops, unfortunately...
Thanks!
I'm not completely familiar with what you're doing, but I had a similar issue using Core Graphics on OS X, where I was getting visible "noise" on my images in certain situations. The issue there was with my buffers: I had to actually zero them out, or else I would get noise in them. Can you try doing a memset on buffer.mData before using it?
The issue that came into play, and why I think you may be seeing the same kind of thing, is that when you allocate large chunks of memory on OS X they are typically zeroed for security reasons, but small pieces of memory are not zeroed out. That can lead to strange bugs: at first you may be allocating a piece of memory large enough that it's cleared for you, but as you continue streaming you may be allocating smaller pieces that aren't cleared.
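Concretely, that would mean clearing the output buffer at the top of the render callback, before the copies (a minimal sketch of the suggestion, slotted into your existing playbackCallback):
// Inside playbackCallback, before any memcpy: clear the whole output buffer so
// that any bytes the copies don't reach are silence instead of leftover memory.
AudioBuffer buffer = ioData->mBuffers[0];
memset(buffer.mData, 0, buffer.mDataByteSize);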
I'm currently writing a C++ program which should write a PNG file as output. So I made a little piece of code that actually works. I just took the source code from here and condensed it; my code is nopasted here.
BUT: it only works if the image doesn't exceed 1002 pixels in width. I am quite sure the problem is somewhere around lines 29/30, i.e. a malloc problem, but I don't get it.
Thanks for your help, and greetings!
Without diving into the code too deeply, there are these interesting constants:
unsigned width = 1003;
unsigned height = 500;
int rowbytes = 4000;
The last one directly controls the amount of memory allocated for each row. Have you tried increasing this value, or better, deriving it from the width and the number of bytes per pixel instead of hard-coding it?
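As an illustration, a sketch of sizing the rows from the image parameters instead of a hard-coded constant (assuming 8-bit RGBA, i.e. 4 bytes per pixel; adjust for your color type):
#include <cstdlib>

int main()
{
    unsigned width = 1003;
    unsigned height = 500;
    unsigned bytes_per_pixel = 4;                      // assumption: 8-bit RGBA
    size_t rowbytes = (size_t)width * bytes_per_pixel; // 4012 for width 1003, more than 4000

    // One buffer per row, sized to the actual row length.
    unsigned char **rows = (unsigned char **)malloc(height * sizeof(unsigned char *));
    for (unsigned y = 0; y < height; ++y)
        rows[y] = (unsigned char *)calloc(rowbytes, 1);

    // ... fill the rows and hand them to libpng as in the original code ...

    for (unsigned y = 0; y < height; ++y)
        free(rows[y]);
    free(rows);
    return 0;
}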