Emulator development in Java
Formal Metadata

Title: Emulator development in Java
Number of Parts: 542
Author: Neil Coffey
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/61420 (DOI)
Transcript: English (auto-generated)
00:19
As the slide says, my name is Neil Coffey, I'm a Java developer. Of course I'm a Java
00:25
developer with that surname. So this is a talk about a little side project that I started a couple of years ago. I was just keen to see in Java how far I would get with developing an emulator. This is the first emulator that I've developed from scratch. And it kind of started,
00:47
you know, I had a bit of time, you know, we had a lockdown and I kind of thought, well what do I need to write an emulator? Well one of the things I might try and do to start with is get a ROM reader to kind of start from scratch. And then I found,
01:04
I don't know if you've heard, but my country left the EU a couple of years ago. And so I actually found it hard to source a ROM reader from Germany. So the first bit of physical work I did was to build my own, obviously. And then, by the time I'd done this, I was kind of committed at that point, okay?
01:23
So what I'm going to go through then is my experiences of writing an emulator, and kind of, as I say, the first one I've ever written, the decisions, the challenges. It's going to be a little bit of a tour through some of the APIs that there are now in the Java platform for this kind of thing. And in all honesty, there are some pros and cons that I'll talk
01:44
about, yeah? And above all, some little tricks in the APIs that aren't always very well documented that can kind of help us. So why Java? So I'll be completely honest, the main reason for me was it's the language I'm most familiar with, yeah? So I've been
02:01
using Java now for about 20 years. In fact, the first JRE that I used came on floppy disk, okay? So that's how long. These days, it's obviously cross-platform, and these days it's got quite a rich set of APIs, hopefully everything we need to develop an emulator. It's got good longevity, so you tend not to have this thing in Java that
02:23
you sometimes get in Swift, for example, where you kind of come in one morning, try and recompile your code, and find it won't compile anymore because Apple's changed something. Java tends not to have that. It's maintained good backwards compatibility over the years, and so hopefully anything I write moving forward will also run. I don't
02:43
have to have an emulator in a few years' time to emulate the emulator, okay? There are, as well, from a personal view, some APIs coming up that I was kind of keen to have a benchmark for, to see, well, in a couple of years' time, things like the Foreign Function and Memory API that's just about to hit stability, I was
03:04
kind of interested to see, well, what will I be able to do with that when it comes out? Okay, so I set myself some goals that I wanted my emulator to be accurate enough to allow most software to run on. In all honesty, for kind of version one of my first
03:22
emulator, there were some things that I decided not to emulate, things like memory contention issues. There are some weird things that you can get that I'll maybe have time to talk about on the spectrum with kind of glitches in the video display. So essentially, my kind of overall goal was anything that software uses that isn't
03:46
a kind of bug in the hardware that people might accidentally get around or use, I'll try and emulate that. As I've already mentioned, one thing I was trying to do is get a baseline from the basic Java APIs and try not to bring in additional libraries
04:03
as a kind of starting point. I wanted it to be a cooperative application as well, not necessarily just full screen. Performant enough, yeah, as I say, I'm not trying to write a 1 GHz Z80 for my kind of first project. Which machines did I try to emulate? So I went for the trusty old ZX Spectrum, so apologies to Steve, I'm adding to the
04:23
pile of emulators now available for the ZX Spectrum. And I also thought that the Sega Master System, so why these two together? A, these are the machines I had as a kid, okay? But B, if we look at the technical specs, there are actually some similarities
04:41
that are going to help us. So you can see the video resolution is similar, although the video chips and formats that they use are very different. The CPU essentially is a Z80 at around 3.5 MHz, I say around 3.5 actually, there are different models of the Spectrum with different speeds, and the Master System, I think it was 3.58 MHz for the Master System.
05:05
And you can see then here, probably everybody in this room is kind of fairly familiar with these machines, but for those who aren't, so you can see that the Sinclair Spectrum in comparison was all about saving money. So you had one custom ULA here that was handling the video and the sound, and was also the memory controller, compared to the
05:26
Master System that had a bit more onboard hardware that we're going to have to try and emulate. So just a little bit more detail of some of the difficulties again for people who may be familiar. So the ZX Spectrum, it renders its video all from RAM, essentially
05:46
with kind of no acceleration as such. And it's got this format that really kind of gives the ZX Spectrum its look and feel. So you have essentially a one bit per pixel bitmap, and then over the top of that, you're allowed two colours essentially
06:06
per 8x8 cell. This kind of gave the Spectrum a bit of a unique look and feel, with bright and flash attributes as well, per cell. Compare that to the Master System, where you've got an
06:21
actual dedicated graphics chip. But this was all tile-based. So you have a 32x24 tile display. Each tile is 8x8 pixels. So the eagle-eyed amongst you will notice that you can't actually define enough unique tiles to give each pixel in that display a
06:47
kind of unique pixel. So anything that looks like it does, you'll see you get these kind of, almost like little manga cards for some games. Or here, where we've tried
07:01
to fill the screen, but obviously secretly around the edges, we've actually got blank space. There wasn't actually enough memory to have unique tiles for every space on the screen. But despite that, it did have features that were actively kind of advocated by Sega
07:25
to its developers to make the most possible use of the video chip. So the way it worked, you have a series of registers to control things like the scrolling, the colours, and there was a mechanism via interrupt, so actually on each scan line, or on every nth scan line
07:45
depending on how you programmed it, you could actually change those registers. Yes, you could change the scroll position at different parts of the screen, you could switch off the screen, you could potentially change the colour palette. And so that's something when we're doing our video rendering, we're going to have to have a little
08:02
think about how we can kind of optimise that a little bit. I'll just give a very quick example. So we're going to see here we've got some parallax scrolling, where you see how on different scan lines we're setting a different X position. And that's quite a nice effect, that's a game called Choplifter. On the next example, we're actually going to have a
08:24
case where, it's not literally turning off the screen, but it's changing the base address of the screen memory to effectively turn it off at that bottom part. And this is probably the most extreme example here, where literally on every other scan
08:44
line, we're changing the scroll position to kind of give that effect there. So, very briefly, I'll just give a little bit of the overall organisation of the emulators, kind of the first thing you really need to think about. So it's how we kind of turn this, this is very high
09:01
level obviously, but this is essentially what the hardware looks like. We've got an address bus at the top with the ROM and the RAM connected, we've got a data bus at the bottom with any peripherals, which on the Spectrum were fairly minimal, there was the 128 version with the sound chip. And then on the Master System, you can see again, similar
09:20
idea, but notice that the ROM essentially is the cartridge that you plug in. Yes, when you plug a cartridge in, you're kind of directly communicating with the Z80 and any logic for things like memory paging, you can have that on the cartridge. And then a few more peripherals going on in the data bus, we've got the video processor there,
09:41
the programmable sound generator, there's also an FM unit, which I'll touch on briefly, and the controllers. So then what I try to do, I'm sorry, there was the emulator clock there as well, and what I try to do is to abstract that down, so that I'm going to organize the program this way. We've obviously got the Z80 implementation, which is obviously a
10:01
fairly fundamental part. But then what I've actually done is in my implementation, I've separated out the Z80 decoder from the actual instruction loop. This is quite nice when we want to add a debugger
10:21
as well, then you can go through the same code to decode the instructions for the debugger. And then we've got an abstract IO bus, from which again, on the Master System we'll have our Master System IO bus, on the Spectrum the Spectrum IO bus, et cetera; memory follows a similar idea, so we have
10:41
subclasses of these overall base classes. And the clock, which is actually working the other way round to the way the hardware works, the clock is effectively going to be a kind of brake on the CPU thread and is going to tell it when to pause to keep things at the right rate of instructions. And
11:00
there'll be a little bit of feedback as well between the video thread so that it can interact with the CPU to do the things I've just mentioned about accurately timing the scroll register and things. So just an example, I end up with interfaces like this and then to the Z80, it's effectively, it doesn't care whether it's
11:25
a master system or a spectrum it's communicating with, it just goes through these abstract interfaces like this. A little bit of detail I've just mentioned about the CPU. The implementation that I went for, which isn't necessarily the most popular of the traditional
11:44
emulators, I tried to really break down the instruction set into more of an object oriented form. So I've got instruction types you'll see there, and then for each type the individual instruction is kind of returned as an object that says well it's this type and it's from this source, this destination. So I've tried to
12:04
not have to write 900 different routines for all the various combinations that the Z80 had. And that gives quite nice code. There's a little bit of a performance trade-off obviously, but it turns out not to be too bad. And then the other decision I made was
12:24
well, we're writing in Java in 2023 now. So I decided, well, I want to make the most of multi-threading. So the various components I've just mentioned will actually sit in their own threads. That's kind of nice organizationally and
12:44
also in terms of monitoring the performance of the app, it means it can break down a little bit more easily what resources are being used for each component. So just to give a little bit of an overview of this, so we'll have at the top we've got our CPU
13:07
thread at the top there, which is going to be interacting with the clock and it's periodically going to say I've done this many instruction cycles, how am I doing, do I need to pause to kind of maintain the
13:24
correct instruction rate. Then we've got the video controller, which is going to be periodically sending VBlank interrupts every frame to the CPU to notify it. We've got then also a separate rendering thread
13:44
which is going to do any of the kind of heavy lifting rendering that we need to do. So anything like scaling, calculating what the actual pixels are. And then the idea is that here in the event dispatch thread, which is a single threaded, at that point we have to kind of have our ducks in a row and
14:04
to know what we're actually going to render. And then additional complication is there's going to be an audio service in its own thread as well, so a bit of choreography going here. So the different APIs that we're going to use, there's a standard Java swing API, so there's no additional
14:24
OpenGL plugins here. There's Java Sound, as I mentioned; memory-mapped NIO is kind of a little hidden one, but when we're emulating cartridge saves and we want to write data, I actually open a memory-mapped file to save the data; and threading is often important.
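The memory-mapped save idea mentioned there can be sketched roughly like this. The class and file layout are hypothetical, but `FileChannel.map` is the real NIO API: writes to the mapped buffer persist to the file without explicit save calls.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: back cartridge save RAM with a memory-mapped file.
public class CartridgeSave {
    private final MappedByteBuffer sram;

    public CartridgeSave(Path saveFile, int size) {
        try (FileChannel ch = FileChannel.open(saveFile,
                StandardOpenOption.CREATE,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            // The mapping remains valid after the channel is closed.
            sram = ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public int readByte(int addr)            { return sram.get(addr) & 0xFF; }
    public void writeByte(int addr, int val) { sram.put(addr, (byte) val); }
}
```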
14:44
I'm not going to really mention too much, but there are also desktop and taskbar integration APIs that help with integrating into desktop and system menus and things. OK, so we'll start with the graphics. The standard swing Java 2D API, for those of you who may be familiar with, the idea is that you override the
15:04
JComponent class and you implement a paintComponent method. And in here, in principle, we can set various options to hint whether we want quality, speed, etc. And then finally, we can render an image and it will be rendered with these
15:24
different settings. But some caveats with that. Unfortunately, it turns out that some of those options effectively end up turning off GPU acceleration, and they can be quite CPU hungry and inefficient. It's not clearly documented which ones actually
15:45
run on the CPU and which on the GPU, but it effectively ends up that the fast options without any quality interpolation are the ones that just go straight to the GPU.
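A rough sketch of that paintComponent approach, keeping to the fast, non-interpolating hints; the class and field names here are illustrative, not the talk's actual code:

```java
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import javax.swing.JComponent;

// Sketch: a Swing component that draws the emulator frame using the
// "speed" hints, which (per the talk) stay on the accelerated path.
public class EmulatorScreen extends JComponent {
    private BufferedImage frame;   // produced by the render thread

    public void setFrame(BufferedImage frame) {
        this.frame = frame;
        repaint();
    }

    @Override
    protected void paintComponent(Graphics g) {
        Graphics2D g2 = (Graphics2D) g;
        // Nearest-neighbour: the fast option without quality interpolation.
        g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
        g2.setRenderingHint(RenderingHints.KEY_RENDERING,
                RenderingHints.VALUE_RENDER_SPEED);
        if (frame != null) {
            // Scale the emulated frame up to the component size.
            g2.drawImage(frame, 0, 0, getWidth(), getHeight(), null);
        }
    }
}
```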
16:01
So we're going to have to be a little bit careful not to use too much CPU time for each frame render. And then there's also an additional problem that the standard API to set and get pixels from buffered images, actually it's quite inefficient for setting
16:20
individual pixels, but we have a workaround. So this would be the standard API that we'd use. We create our image like this, lovely, we can set different types, there are about 15 different types that we could use, and then we can call setRGB, and whether that backing store is an int per pixel or bytes per pixel or whatever, it will work out
16:40
how to set the RGB, lovely. But in practice, we're probably never going to have anything other than an int per pixel. So this is the least efficient way we could possibly imagine to set the pixel data. Luckily, we can actually, with a little bit of jiggery-pokery, we can ask Java2D for the underlying int array, and then
17:05
we can just directly write to that. The advantage being that things like Arrays.fill and System.arraycopy then become available. Yeah. There's a caveat that normally you wouldn't do this, because if you've got static images that you're rendering lots of times, what would normally happen is that
17:24
Java2D sends that to the GPU once, then subsequent renders are effectively free. But we don't really need that for our purposes, we're going to be rendering a different image on each frame effectively. So that's not such a problem for us. So then, just to come back to what I was showing you earlier with the different
17:44
scroll settings on different raster lines, we kind of want to get the best of both worlds with how we then end up structuring things. So what I do is I basically break down the image and say, well, for this frame,
18:04
where are the points where the things like the scroll registers actually change? On some games, they will just have one setting per frame, and I can then just efficiently render the entire frame without having to worry about clips per section,
18:25
etc. So I don't kind of literally go through pixel by pixel kind of chasing the beam. So just a kind of brief example here. So I'll split into sections and then I can say for that section, get me the relevant settings and then go through and fetch from the tile map
18:44
data and render it kind of almost as you'd expect. So by doing that, and by using this trick of getting the raw int array, this does allow us to get quite a good speed-up on the rendering. So if there's kind of one
19:03
thing you're doing in Java, the one kind of speed up to think about is probably this. So having known about that trick, there are some little tricks that we can do. Obviously people are familiar with CRTs where actually the way these systems
19:23
work, they kind of render every other scan line. And if you've got a really good quality monitor it looks like that; most people's monitors are a bit more like that, where you kind of have bleed in between the scan lines and you also kind of get ghosting effects, this kind of thing. So we can try and get a little bit of that look and
19:43
feel. So what I'm literally going to do here in the Java is render every other scan line. I'm going to render a kind of darkened version of that scan line, so I can kind of produce something like this. Then we just have to be a little bit careful with the scaling, because you can kind of get moiré
20:02
effects if you've got a kind of odd scale factor. So we do a little bit of extra interpolation to try and get around that. Then another effect that we can do in Java is these kind of ghosting effects. If we can define our effect in terms of a convolution
20:22
matrix, which you may have seen, then there's a native library built in that will allow us to render that efficiently, and that will also access the integer data under the hood. It won't go through that setRGB every time. So we can get effects like this.
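A minimal sketch of that convolution idea, using the built-in java.awt.image.ConvolveOp; the kernel weights here are illustrative, not the ones used in the talk:

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

// Sketch: a horizontal smear kernel applied via ConvolveOp, which runs
// natively and reads the pixel data directly rather than via get/setRGB.
public class GhostEffect {
    // 1x3 kernel: each output pixel blends a quarter of each neighbour.
    private static final Kernel KERNEL =
            new Kernel(3, 1, new float[] { 0.25f, 0.5f, 0.25f });
    private static final ConvolveOp OP =
            new ConvolveOp(KERNEL, ConvolveOp.EDGE_NO_OP, null);

    public static BufferedImage apply(BufferedImage src) {
        return OP.filter(src, null);   // new image with the effect applied
    }
}
```

A CRT-style ghost would typically weight trailing pixels more heavily; the principle is the same.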
20:41
Again, with quite low rendering time. And then this is one of my favourite Spectrum games from when I was a child, so something like this, combining the kind of CRT effect. Another issue we have is that there are multiple ways to scale images in Java, and
21:01
depending on which one we pick, we kind of get different performance characteristics. So the thing I'm actually looking at, which is kind of most stable, is to actually just hard-code the scaling myself, because then I can go through this and access the int array directly.
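The underlying-int-array trick referred to here can be sketched like this. `FrameBuffer` is a hypothetical wrapper, but the `DataBufferInt` cast is the real Java2D mechanism for a TYPE_INT_RGB image:

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;
import java.util.Arrays;

// Sketch: bypass setRGB by writing straight into the image's backing
// int array (one int per pixel for TYPE_INT_RGB).
public class FrameBuffer {
    private final BufferedImage image;
    private final int[] pixels;   // direct view of the raster's storage

    public FrameBuffer(int width, int height) {
        image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        // Grabbing the array "untracks" the image for acceleration, which
        // is fine here: we render a different image every frame anyway.
        pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
    }

    public void setPixel(int x, int y, int rgb) {
        pixels[y * image.getWidth() + x] = rgb;
    }

    public void clear(int rgb) {
        Arrays.fill(pixels, rgb);   // bulk operations become available
    }

    public BufferedImage image() { return image; }
}
```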
21:22
Some of these other built-in APIs, unfortunately, go through that getRGB, setRGB path to support different formats, but we don't need that. So the Master System and the Spectrum had quite
21:43
different ways of producing sound. The Spectrum obviously was this kind of very simple speaker. It could effectively be a 1 or a 0, and you kind of control a square wave literally from the CPU to produce your sound. But then on something like the Master System that had an actual sound chip, you would control the sound by setting registers
22:03
to say I want tone 1 to be this frequency, etc. So we want to abstract those two ways of producing sound, so that we can just have one generate-sample-data method, and then our audio service is going to call into that. And so this is just a brief slide here
22:23
of what I do. So I've got the subclass, for example, for the Spectrum type sound there. And then here, a bit more complicated, but we effectively do a similar thing. Whenever we're asked for some sample data, we're going to calculate that sample data and spit it back. And then
22:43
the question becomes, well, given that sample data production, how do we actually pipe it down to the audio output? Java has this slightly quirky model where you have a notional mixer that's got inputs and outputs. And the slightly perverse
23:03
thing is that everything is seen in terms of this notional mixer. So when you want to output sound, you're actually sending it to an input of the mixer. So we call it a source line. Whereas to us,
23:20
it's not really a source, it's a target, but that's the reason for that. So if I, you see here, they're also tied to particular drivers and I can enumerate the different drivers on my machine. I found out, for example, that my Mac can listen through my iPhone microphone. That was the first time I
23:40
found that out. So we query the available mixers, and then we query them for their available source lines. And then we can write the data to the source line. We open it with a format that we want, we write the data. And so this is now where I can
24:00
call my generate sample data method. When there's some frames to send, I send them. People might have spotted a slight flaw with that. I've got a nice infinite loop there. Something like the spectrum, I need to be able to tell the difference between there's no audio and there's no audio yet,
24:20
but there's some on the way, and I don't want to sit in an infinite loop in the meantime. So, yeah, these were just code examples of how we output those ones and zeros and then translate them. So I'll just skip quickly. So we get those ones and zeros,
24:41
and then what we're actually going to do is we're going to use a condition object, which is part of the Java Concurrency API, so that we can basically, in our audio service thread, we can wait for a send. Okay, there we go. Okay.
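A minimal sketch of that condition-object pattern for the audio thread; the class and method names are illustrative, not the talk's actual code. The timeout is what distinguishes "no audio" from "no audio yet, but some on the way" without spinning:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: the audio service thread blocks on a Condition until the
// emulation thread publishes sample data.
public class SampleQueue {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition samplesAvailable = lock.newCondition();
    private byte[] pending;

    public void publish(byte[] samples) {
        lock.lock();
        try {
            pending = samples;
            samplesAvailable.signal();     // wake the audio thread
        } finally {
            lock.unlock();
        }
    }

    /** Blocks up to timeoutMillis for samples; returns null on timeout. */
    public byte[] take(long timeoutMillis) {
        lock.lock();
        try {
            while (pending == null) {
                if (!samplesAvailable.await(timeoutMillis, TimeUnit.MILLISECONDS)) {
                    return null;           // "no audio yet", not an error
                }
            }
            byte[] out = pending;
            pending = null;
            return out;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        } finally {
            lock.unlock();
        }
    }
}
```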
25:06
There's also a little bit that we can do with hybrid buffering, which is basically where we ideally want a small buffer to fill and send. But that then introduces the problem that
25:22
we run the risk that if we can't fill our buffer in time, we end up with choppy audio. And so in practice, what we can actually do is have a larger buffer and detect when it's half full and kind of keep topping it up. So that's basically how I do it. Okay. The FM synth, which I mentioned briefly, I never had one of these. I think they're quite rare, you can only get them in Japan.
25:41
But the master system, this was an option for the master system. Okay. And what I actually do for this, I cheat slightly. I use Java's built-in MIDI software synthesizer. So I translate the instructions to that FM synth into MIDI commands and I send these
26:01
to the soft synth. And I don't know if this is going to play on the projector, but I'll turn up the audio here and just see. So you'll hear the difference, you'll hear the normal PSG sound chip and then you will hear the FM kind of synth. Oh, I don't think I can hear that. It's probably too quiet.
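The FM-to-MIDI "cheat" might look roughly like this. The frequency-to-note mapping is standard MIDI maths; the FM register decoding, and the hookup that actually sends these messages to MidiSystem.getSynthesizer()'s receiver, are omitted, and the class name is hypothetical:

```java
import javax.sound.midi.InvalidMidiDataException;
import javax.sound.midi.ShortMessage;

// Sketch: translate an FM tone setting into a MIDI note-on message
// for Java's built-in software synthesizer.
public class FmToMidi {
    /** Standard MIDI mapping: 440 Hz = note 69 (A4), 12 notes per octave. */
    public static int freqToMidiNote(double freqHz) {
        return (int) Math.round(69 + 12 * (Math.log(freqHz / 440.0) / Math.log(2)));
    }

    public static ShortMessage noteOn(int channel, double freqHz, int velocity) {
        try {
            return new ShortMessage(ShortMessage.NOTE_ON, channel,
                    freqToMidiNote(freqHz), velocity);
        } catch (InvalidMidiDataException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Picking which General MIDI voice to assign per channel is then a program-change message, which is where the "playing about with the voices" comes in.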
26:33
And you see there, we can then start playing about with things like the voices that we assign to those. Okay. So I'll just touch on very
26:42
briefly, because time is getting towards the end. So I'll just touch very briefly on the timing and concurrency. So the CPU, obviously, we need to maintain at a kind of desired instruction rate. So the way I do this is I introduce pauses, but then we want
27:02
to be able to accurately measure those pauses. We also need to accurately measure the timings between the frames that we're sending. And there are standard APIs in Java to do this. A little issue that I did come across was with the standard executor framework that we'd normally use
27:22
for doing this. So here we say, right, okay, I want a frame every 60th of a second. Depending on your platform, you can actually, in practice, get quite erratic intervals between the events. So you can see, in particular on macOS, I find, you could get this kind of
27:42
20% error. So this is just one experiment, for example. What I luckily found was that if we request a low sleep interval, the accuracy is actually better than for higher sleep intervals, and
28:02
it seems to max out at a particular amount. I'm not exactly sure of the underlying reason for that in Darwin, I must admit. But then what this leads to is that, depending on the platform, we can come up with a different strategy for maintaining accurate timing. And a challenge, a perpetual challenge with Java really, is
28:22
that the best strategy will depend on the platform. Very briefly, data manipulation, which is sometimes something people are a bit scared of. In Java, all of the types, well, they're generally signed, char is unsigned, but they're
28:43
generally fixed-width and signed. We can't do what we can in C and other languages in defining our own types. So when we want to do things like register access or sending the audio data, the ByteBuffer is generally the easiest way to work around this. You'll notice, when we want bytes, because
29:03
the byte type is signed, if we want an unsigned byte then we would normally promote it to an int, and then we can basically AND with 0xFF to lop off everything above the lowest byte. So there's just a question with that about
29:23
well how do we check that the JIT compiler is doing what we need to do? So I'll just step forward slightly, and what we can actually do, we can ask it. So we can ask it to dump out the JIT compiled assembler, and then we can check if some of those optimisations are actually going in.
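The ByteBuffer-plus-mask pattern just described can be sketched as follows; `RegisterFile` is a hypothetical example class, not the talk's code:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: Java has no unsigned 16-bit type, so store a short and mask
// off the sign extension on read.
public class RegisterFile {
    private final ByteBuffer regs =
            ByteBuffer.allocate(64).order(ByteOrder.LITTLE_ENDIAN);

    public void writeWord(int addr, int value) {
        regs.putShort(addr, (short) value);    // low 16 bits stored
    }

    public int readWord(int addr) {
        return regs.getShort(addr) & 0xFFFF;   // mask off sign extension
    }

    public int readByte(int addr) {
        return regs.get(addr) & 0xFF;          // unsigned byte
    }
}
```

The JIT-inspection step the talk describes uses the HotSpot diagnostic flags (`-XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly`, which need the hsdis disassembler plugin installed) to confirm that such calls compile down to plain half-word stores.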
29:43
So this is a very simple test I set up: it's basically iterating repeatedly, effectively writing a word and then reading it back from a ByteBuffer. This is obviously a slightly contrived, really the contrived corner-case, example, but it kind of illustrates the kind of thing that's possible. So with that call,
30:03
I'm effectively writing a two byte unsigned value into there via a byte buffer, so it looks like I'm creating a byte buffer, setting values on it, calling a method on it. But by the time we get down to the actual JIT compiled assembly code, in the best case we're actually not, that just compiles down into
30:23
we are storing a half word in there. And so that's the kind of thing that we can do to check for those things. Those method calls are completely optimised out. So there you go, so in conclusion, using those various APIs together,
30:43
we can write an emulator in Java. A few pros and cons, caveats around the different platform behaviour, a few things still to add in here, this is very much kind of version one. However, it's at the point where it will actually run quite a lot of the
31:03
Spectrum and Master System software. If anyone's curious, I've got an initial release there on GitHub; there's going to be source code and further improvements on the way, so watch that repo, as they say. A few references there that people may or may not have come across, this book here by Chris Smith is I think kind of a remarkable piece
31:23
of work about the kind of the very kind of low level details of how the spectrum works and the usual kind of reference guides that over the years have surfaced on the web. And so with that, I think I'll hand back.