I'm playing with an old ZX Spectrum 48k and I'm wondering how exactly it is possible to enter POKE codes.
Do you load a game from tape, somehow break out of the program, type in the POKE statements, and then start the program running again?
I've done a lot of searching on this but haven't been able to find exactly how this is done so any leads on this would be greatly appreciated.
First of all, the meaning of PEEK and POKE:
10 LET x = PEEK 40000: REM returns (reads) the value (0-255) stored at address 40000
20 POKE 40000, 201: REM writes the value 201 to address 40000
Most programs loaded a small BASIC program called a loader. It was something like:
10 CLS
20 PRINT "Loading AWESOME GAME!!!"
30 LOAD "" SCREEN$
40 LOAD "" CODE 40000
50 RANDOMIZE USR 40000
The meaning should be straightforward: load a screen presentation (line 30) to keep the user entertained while the machine-code program (the game itself) loads (line 40), and finally launch the game (line 50).
About line 50, USR 40000 is the expression that does the trick, calling the machine code at address 40000. RANDOMIZE merely uses the value returned by USR to initialize the random seed used by RND, although in this case the call never actually returns.
So, the first tries would be:
Press "break" (more or less equivalent to Ctrl+C), enter list, and put the pokes in line 35, i.e., once the program has been loaded but it has not been executed yet.
Instead of typing load "" in order to launch the game, type merge "" (this was used to combine the basic program in memory with the one in tape). The process will stop before executing the loader. This is useful when the loader included a poke instruction that disabled BREAK.
The problem with these methods was that, at first, the attempts to hide the loader's innards were naive (such as a PAPER 0: INK 0 instruction at line 10, making the listing temporarily invisible), but they soon became far more complex, up to the point of the loader actually being a machine-code program embedded in REM statements.
The next step was to analyze the headers of the code blocks loaded after the BASIC loader, work out the load address and the length of the code, and create your own loader in which you could include whatever POKEs you wanted. Many magazines distributed this kind of loader, which was intended to be loaded before the original one (the new loader looked for the specific code blocks, bypassing the original BASIC loader).
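A minimal hand-rolled loader of that kind might look like the following (the load address, length and POKE are placeholders; you would take the real values from the tape headers and from a published listing):

10 CLEAR 39999: REM keep BASIC clear of the code area
20 LOAD "" SCREEN$
30 LOAD "" CODE 40000
40 POKE 45678, 201: REM hypothetical POKE from a magazine listing
50 RANDOMIZE USR 40000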
So developers then decided to put the code blocks on tape without headers, as well as protecting the loader: for example, a loader that just loads a machine-code routine replacing the loader in ROM, reading blocks at different speeds and without header information; or a loader that loads a single headerless block containing both the presentation screen and the code for the game.
And then special hardware such as the Multiface-1 appeared. Reading the Multiface-1 manual you can see how it worked: pressing a red button triggered an NMI (non-maskable interrupt) that invoked the Multiface's own software, held in the peripheral's ROM. A menu was then shown allowing you to save the memory contents at that point (the saved code was free of any protection, thus opening the path to creating your own loader with POKEs), or even examine (PEEK) current values at specific addresses in memory and enter POKEs directly, with which you could find, for example, the beginning of the routine that decreases your lives by one.
The published POKEs were usually of the kind (this is a simplification) POKE addr, 0 or POKE addr, 201, where addr was the beginning of a routine decreasing the number of lives available, or detecting the collision with an enemy.
The code 0 is the assembly NOP (no operation) instruction. During a NOP, the CPU does nothing.
The code 201 (C9 in hex) is the assembly RET (return) instruction, which means return from a subroutine. In BASIC you would call a subroutine with GOSUB and return from its end with RETURN; in assembly the equivalent pair is CALL/RET.
If you had a 201, then it would effectively mean that a subroutine (say subtracting one from your lives) such as:
9950 let lives = lives - 1
9960 return
was transformed to:
9950 return
9960 return
If you had a 0 value, the same routine would be transformed to:
9950
9960 return
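Tying that back to machine code: if (hypothetically) the routine decreasing your lives started at address 45000, the two kinds of published POKE would be entered as:

POKE 45000, 201: REM return immediately, lives are never decreased
POKE 45000, 0: REM blank out (NOP) the first byte of the routine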
The POKE codes printed in ZX Spectrum magazines usually expected you to have a plug-in hardware device (e.g. Multiface). Once the game loaded, you could press the Multiface button to halt the game, enter the POKEs, and then return back to the game.
Without a special device, you need to play with the loader programs, as described in the other answers. You need to load the initial small loader program and then BREAK into the code. If you're lucky, the loader does nothing more complicated than loading the rest of the game and then executing the actual machine-code game with a RANDOMIZE USR call. In this case you can modify the loader BASIC program to do the POKEs after the game code has loaded but before the game is started.
However, many games make this hard because they include custom loader code. This is often written in machine code embedded in the small BASIC program inside REM statements. The machine code loads the game and executes it, and because it never returns control to BASIC, there is no opportunity to enter the POKEs. If you are dedicated enough, you could try modifying the machine code to either return control back to BASIC so you can POKE away, or to perform the POKEs itself in machine code. This is quite hard because, if I remember rightly, the editor used to scramble lines containing non-printable characters in the REM statements. There were software tools like RoyBot that could help you with modifying code in memory.
Some game developers did really crazy stuff to prevent game hacking, such as implementing loader code that overwrote its own code while it was being executed.
Most Spectrum programs use a two step process to start a game:
Load and run a small BASIC program
That small BASIC program then loads a much larger machine-code block and then jumps to the machine code's entry point (e.g. RANDOMIZE USR 28455).
If you can manage to stop between those steps, you can POKE around (to increase the number of lives, ...) and then start the machine code with RANDOMIZE USR 28455, assuming you somehow found out the correct address.
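For example, if the entry point really is 28455 and you had (hypothetically) worked out that the routine decreasing your lives starts at address 33333, you would type directly at the prompt:

POKE 33333, 201
RANDOMIZE USR 28455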
Once a machine-code program is running, there is usually no way to stop it and get back to the BASIC interpreter, unless the machine-code program provides some explicit (or inadvertent) way to do so.
As I recall from a long time ago... When a Spectrum game loads, it initially loads a small loader program and runs it; the tape continues and the bulk of the program is loaded in. The last command in the loader program then issues a USR call which jumps into everything loaded and starts the game. So, as I remember, you have to pause the tape once the loader program has loaded, stop that last line of code from being executed automatically, and then continue. Once the bulk has loaded, you issue your POKEs from the command line, and then the original USR call to start the game. The loader program will have loaded after the first set of red and cyan stripes, followed by the very short yellow and blue stripes on the screen (as I recall it prints the name of the program found at this point). Stop the tape, press BREAK, then LIST to see the code. Best of luck and great question!
As a workaround to find the right POKE, after loading and BREAKing into the program, you can search for instructions like:
LD A,3
This is for a game that starts with 3 lives. The code for this instruction is:
3E 03 -> in hex
62 3 -> in decimal
Search memory for this data and change the 03 to 255, for example (255 is the maximum allowed value). Then test it.
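A quick-and-dirty search for that byte pair can even be done from BASIC after breaking in (a sketch; it assumes the game code sits between 40000 and 59999, so adjust the range to wherever the code was actually loaded):

10 FOR a=40000 TO 59998
20 IF PEEK a=62 AND PEEK (a+1)=3 THEN PRINT "LD A,3 found at ";a
30 NEXT a

Each printed address is a candidate; POKE the address plus one with the new value and test the game.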
Related
I'm working on an implementation of a gdb server. The server talks to gdb using the RSP protocol and drives a CPU model. While developing the line-step function (with range-stepping mode enabled) I found that my CPU model needs to take some time to finish just one instruction step. The reason is that the CPU model runs in an individual process, so I have to pass each packet (instruction step) through IPC. With thousands of instructions in a source line, it spends far too much time compared with a normal run of that line.
Can I ask gdb to use temporary breakpoints (set on any possible instruction that can go out of the range corresponding to the source line) to assist with the step functionality? Does gdb really know where to set the required breakpoints? If the answer is no, is there a good way to deal with this problem?
Thank you for your time!
Ted CH
You say that you are using range-stepping mode, which would imply your gdb server already supports the vCont packet and the r action. See the GDB Remote Serial Protocol documentation for all the details.
The r action gives a range start and end, and you are free to step until the program counter leaves the specified range.
Your server is free to implement this as you like, so you could place a temporary breakpoint, then resume execution until the breakpoint is hit. But you need to think about both control flow and traps, either of which might cause a thread to leave the range without passing through the end point.
If passing each single step through to your simulator is slow, could you not just pass the vCont range information directly through to the simulator, and have the simulator stop as soon as the program counter leaves the range? Surely this would be quicker than multiple round trips between simulator and server.
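A rough sketch of that last suggestion on the server side (the names here are hypothetical, error handling is omitted, and the optional thread-id suffix of the action is ignored) might look like:

#include <cstdint>
#include <string>

// Hypothetical interface to the CPU model that runs in a separate process.
struct CpuModel {
    // Step internally until the PC leaves [lo, hi); one IPC round trip instead of thousands.
    uint64_t RunUntilOutside(uint64_t lo, uint64_t hi);
};

// Handle one "r<start>,<end>" action taken from a vCont packet.
uint64_t HandleRangeStep(CpuModel& cpu, const std::string& action) {
    // action looks like "r4006b0,4006d4" (addresses in hex, no 0x prefix)
    auto comma = action.find(',');
    uint64_t start = std::stoull(action.substr(1, comma - 1), nullptr, 16);
    uint64_t end   = std::stoull(action.substr(comma + 1), nullptr, 16);
    return cpu.RunUntilOutside(start, end);   // then report the stop reason back to gdb
}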
I have an embedded system with code that I'd like to benchmark. In this case, there's a single line I want to know the time spent on (it's the creation of a new object that kicks off the rest of our application).
I'm able to open Trace->Chart->Symbols and see the time taken for the region selected with my cursor, but this is cumbersome and not as accurate as I'd like. I've also found Perf->Function Runtime, but I'm benchmarking the assignment of a new object, not of any particular function call (new is called in multiple places, not just the line of interest).
Is there a way to view the real-world time taken on a line of code with Trace32? Going further than a single line: would there be a way to easily benchmark the time between two breakpoints?
The solution by codehearts, which uses the RunTime commands, is just fine if you don't have a real-time trace. It works with any Lauterbach tool and any target CPU.
However, if you have a real-time trace (e.g. a CPU with ETM and Lauterbach PowerTrace hardware), I recommend using the command Trace.STATistic.AddressDURation <start-addr> <end-addr> instead. This command opens a window which shows the average time between the two addresses. You get the best results if you execute the code between the two addresses several times.
If you are using an ARM Cortex CPU, which supports cycle-accurate timing information (usually all Cortex-A, Cortex-R and Cortex-M7) you can improve the accuracy of the result dramatically by using the setting ETM.TImeMode.CycleAccurate (together with ETM.CLOCK <core-frequency>).
If you are using a Lauterbach CombiProbe or uTrace (and you can't use ETM.TImeMode.CycleAccurate), I recommend the setting Trace.PortFilter.ON. (By default the port filter is set to PACK, which allows more data and program flow to be recorded, but with slightly worse timing accuracy.)
Opening the Misc->Runtime window shows you the total time taken since "laststart." By setting a breakpoint on the first line of your code block and another after the last line, you can see the time taken from the first breakpoint to the second under the "actual" column.
I am making a video game, which is a pretty small 2D shooter. Recently I noticed that the frame rate drops dramatically when there are about 9 bullets in the scene or more. My laptop can handle advanced 3D games and my game is very very simple so hardware should not be a problem.
So now I have a very big codebase (at least for one person) and I am pretty confused about where I should look. There are too many functions and classes related to bullets, and I don't know how to tell, for example, whether the rendering function or the update function has the problem. I could use the Visual Studio 2015 debugging tools for other programs, but for a game it is not practical: if I put a breakpoint before the render function, it gets hit 60 times a second, and besides I can't input anything, so I would never have bullets with which to test the render function! I tried to use Task Manager, and I realized that CPU usage goes up really fast for each bullet, but when the game slows down only 10 percent of the CPU is being used!
So my questions are:
How can I analyze my functions when I can't use the debugger?
And why does the game slow down while it can still use more system resources?
To see what part consumes most of the processing power, you should use a function profiler. It doesn't "debug", but it creates a report when it's finished.
Valgrind is a good tool for that.
Why does the game slow down? That depends on your implementation. I could create a program that divides two numbers and make it take 5 minutes to calculate the result.
We're in the video-game industry as well and we use a very simple tool on PC for CPU profiling: very sleepy.
http://www.codersnotes.com/sleepy/
It is simple, but it has really helped me out a lot of times. Just fire up the program from the IDE, let Very Sleepy collect a few thousand samples, and off you go!
When it comes to memory holes, Valgrind is a good tool, as already noted by The Quantum Physicist.
For timing, I would write my own small tracing/profiling tool (if my IDE does not already have one): use text debugging output to write short messages to a log file, something like this:
#include <cstdio>
#include <ctime>

void HandleBullet() {
    printf("HandleBullet START: %ld\n", (long)clock());  // timestamp via clock(); swap in your platform timer
    // do your function stuff
    printf("HandleBullet END: %ld\n", (long)clock());    // or calculate the elapsed time of the function directly
}
Write those debugging messages in all of the functions where you think they could take too long.
After some execution time, you can look into that file and see if something obvious happened (blocking somewhere).
If not, use a high-level language of your choice to write a small parser for the log file you created, to tidy up and analyze your output. Calculate things like the overall time spent in a function, or chart which functions took the longest. It should not be too difficult if you stick to a log message style that is easy for you to parse.
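If you want something slightly tidier than raw printf calls, a small RAII helper (just a sketch, not tied to any particular engine or timer) logs how long any scope took:

#include <chrono>
#include <cstdio>

struct ScopeTimer {
    const char* name;
    std::chrono::steady_clock::time_point start;
    explicit ScopeTimer(const char* n) : name(n), start(std::chrono::steady_clock::now()) {}
    ~ScopeTimer() {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("%s took %lld us\n", name, static_cast<long long>(us));
    }
};

void HandleBullet() {
    ScopeTimer t("HandleBullet");   // logged automatically when the scope ends
    // do your function stuff
}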
For a project, I've created a C++ program that performs a greedy algorithm on a certain set of data. I have about 100 data sets (stored in individual files). I've tested each one of these files manually and my program gave me a result each time.
Now I wanted to "batch process" these 100 data sets, because I may have more to do in the near future. So I've created another C++ application that basically loops and calls my other program using the system( cmd ); command.
Now, that program works fine, but my other program, which was previously tested, now crashes during the "batch processing". Even weirder, it doesn't crash every time, or even on the same data sets.
I've been at it for the past ~6 hours and I can't find what could be wrong. One of my friends suggested that maybe my computer calls the other program (with system()) too quickly, so it doesn't have time to free the proper memory space, but I find that hard to believe.
Thanks!
EDIT:
I'm running on Windows 7 Professional using Visual Studio 2012
I have an application (I didn't write it) that is producing APPCRASH dumps in C:\Windows\SysWOW64. The application, while dumping, is crippled but operating at bare minimum capacity so as not to lose data. The issue is that these dumps are so large that the system is spending most of its time writing them, and the application is falling far behind in processing and will start losing data soon.
The plan is to either entirely disable it, or mount it to a RAM drive and purge them as soon as they hit the RAM drive.
Now I've looked into using this key:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb787181%28v=vs.85%29.aspx
But all it does is generate a second dump now, instead of redirecting the original.
The dump is named:
dump-2013_03_31-15_23_55_772.dmp
This is generally the realm of developers on Windows (with stuff like C/C++), so I'd like to ask them here; I don't think ServerFault could get me any answers on this.
Additionally: It's not cycling dump files (they'll fill the 20GBs left on the hard drive), so I'm not sure if this is Windows behavior or custom code in the app (if it is... ick!).
To write a dump file, an app has to call the function "MiniDumpWriteDump", so this is not a behavior of the system or something you can control from outside; it is application driven. If it dumps on crashes, it uses "SetUnhandledExceptionFilter" to set its own handling routine, before(!) the OS takes over. Unfortunately I haven't found a way to overwrite this handler from another process, so the only hope left is that there is a registry entry for the app that switches the behavior or changes the path (as my own applications have, for exactly the reason you describe).
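For illustration, this is roughly what such an application does internally (a sketch with a made-up file name and options; only the application itself decides these, which is why the dump cannot be redirected from outside):

#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

static LONG WINAPI CrashFilter(EXCEPTION_POINTERS* info) {
    // The application chooses the dump location and contents itself.
    HANDLE file = CreateFileA("dump.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    MINIDUMP_EXCEPTION_INFORMATION mei = { GetCurrentThreadId(), info, FALSE };
    // MiniDumpWithFullMemory is the kind of option that produces very large files.
    MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                      MiniDumpWithFullMemory, &mei, NULL, NULL);
    CloseHandle(file);
    return EXCEPTION_EXECUTE_HANDLER;
}

int main() {
    SetUnhandledExceptionFilter(CrashFilter);   // installed before the OS default handling kicks in
    // ... application code ...
}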